00:00:00.000 Started by upstream project "autotest-per-patch" build number 132065 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.059 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:03.280 The recommended git tool is: git 00:00:03.280 using credential 00000000-0000-0000-0000-000000000002 00:00:03.281 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:03.294 Fetching changes from the remote Git repository 00:00:03.296 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:03.310 Using shallow fetch with depth 1 00:00:03.310 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:03.310 > git --version # timeout=10 00:00:03.321 > git --version # 'git version 2.39.2' 00:00:03.321 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:03.337 Setting http proxy: proxy-dmz.intel.com:911 00:00:03.337 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.832 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.844 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.857 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:08.857 > git config core.sparsecheckout # timeout=10 00:00:08.868 > git read-tree -mu HEAD # timeout=10 00:00:08.885 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:08.903 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:08.903 > git 
rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:08.983 [Pipeline] Start of Pipeline 00:00:08.997 [Pipeline] library 00:00:08.998 Loading library shm_lib@master 00:00:08.998 Library shm_lib@master is cached. Copying from home. 00:00:09.012 [Pipeline] node 00:00:09.069 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:09.071 [Pipeline] { 00:00:09.082 [Pipeline] catchError 00:00:09.083 [Pipeline] { 00:00:09.098 [Pipeline] wrap 00:00:09.107 [Pipeline] { 00:00:09.116 [Pipeline] stage 00:00:09.119 [Pipeline] { (Prologue) 00:00:09.318 [Pipeline] sh 00:00:09.653 + logger -p user.info -t JENKINS-CI 00:00:09.671 [Pipeline] echo 00:00:09.672 Node: CYP9 00:00:09.679 [Pipeline] sh 00:00:09.989 [Pipeline] setCustomBuildProperty 00:00:10.001 [Pipeline] echo 00:00:10.003 Cleanup processes 00:00:10.008 [Pipeline] sh 00:00:10.297 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.297 2845360 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.311 [Pipeline] sh 00:00:10.603 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.603 ++ grep -v 'sudo pgrep' 00:00:10.603 ++ awk '{print $1}' 00:00:10.603 + sudo kill -9 00:00:10.603 + true 00:00:10.617 [Pipeline] cleanWs 00:00:10.626 [WS-CLEANUP] Deleting project workspace... 00:00:10.626 [WS-CLEANUP] Deferred wipeout is used... 
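The "Cleanup processes" step above collects stale PIDs with a `pgrep -af <workspace> | grep -v 'sudo pgrep' | awk '{print $1}'` pipeline, then passes the list to `sudo kill -9` with `+ true` so an empty list does not fail the build. A minimal sketch of the filtering on canned input (the second PID and command line are made up for illustration; in the log the only match was the pgrep invocation itself, so the kill ran with no arguments and `true` absorbed the error):

```shell
#!/bin/sh
# Simulated `pgrep -af <workspace>` output: PID, then the full command line.
# The first entry is the pgrep pipeline itself, which must be excluded.
out="2845360 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
4321 /usr/bin/stale-test --workspace /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"

# Same filter as the log: drop the 'sudo pgrep' line, keep column 1 (the PID).
pids=$(printf '%s\n' "$out" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"
```

The real step then runs `sudo kill -9 $pids || true`; the trailing `true` is what makes the `+ sudo kill -9` / `+ true` pair in the log harmless when nothing matched.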
00:00:10.633 [WS-CLEANUP] done 00:00:10.637 [Pipeline] setCustomBuildProperty 00:00:10.649 [Pipeline] sh 00:00:10.935 + sudo git config --global --replace-all safe.directory '*' 00:00:11.024 [Pipeline] httpRequest 00:00:11.659 [Pipeline] echo 00:00:11.660 Sorcerer 10.211.164.101 is alive 00:00:11.667 [Pipeline] retry 00:00:11.668 [Pipeline] { 00:00:11.676 [Pipeline] httpRequest 00:00:11.680 HttpMethod: GET 00:00:11.681 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.681 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.706 Response Code: HTTP/1.1 200 OK 00:00:11.706 Success: Status code 200 is in the accepted range: 200,404 00:00:11.706 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:33.810 [Pipeline] } 00:00:33.827 [Pipeline] // retry 00:00:33.834 [Pipeline] sh 00:00:34.123 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:34.140 [Pipeline] httpRequest 00:00:34.574 [Pipeline] echo 00:00:34.575 Sorcerer 10.211.164.101 is alive 00:00:34.585 [Pipeline] retry 00:00:34.587 [Pipeline] { 00:00:34.601 [Pipeline] httpRequest 00:00:34.606 HttpMethod: GET 00:00:34.606 URL: http://10.211.164.101/packages/spdk_dbbc706e01281013d3e228230628a29ba2fcb376.tar.gz 00:00:34.607 Sending request to url: http://10.211.164.101/packages/spdk_dbbc706e01281013d3e228230628a29ba2fcb376.tar.gz 00:00:34.613 Response Code: HTTP/1.1 200 OK 00:00:34.613 Success: Status code 200 is in the accepted range: 200,404 00:00:34.614 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbbc706e01281013d3e228230628a29ba2fcb376.tar.gz 00:07:19.519 [Pipeline] } 00:07:19.537 [Pipeline] // retry 00:07:19.545 [Pipeline] sh 00:07:19.835 + tar --no-same-owner -xf spdk_dbbc706e01281013d3e228230628a29ba2fcb376.tar.gz 00:07:23.185 [Pipeline] sh 00:07:23.470 + git -C spdk log 
--oneline -n5 00:07:23.470 dbbc706e0 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:07:23.470 ea915c2d7 test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh 00:07:23.470 8d6df385e test/nvmf: Prepare replacements for the network setup 00:07:23.470 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:07:23.470 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:07:23.481 [Pipeline] } 00:07:23.494 [Pipeline] // stage 00:07:23.502 [Pipeline] stage 00:07:23.504 [Pipeline] { (Prepare) 00:07:23.520 [Pipeline] writeFile 00:07:23.534 [Pipeline] sh 00:07:23.820 + logger -p user.info -t JENKINS-CI 00:07:23.834 [Pipeline] sh 00:07:24.122 + logger -p user.info -t JENKINS-CI 00:07:24.135 [Pipeline] sh 00:07:24.423 + cat autorun-spdk.conf 00:07:24.424 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:24.424 SPDK_TEST_NVMF=1 00:07:24.424 SPDK_TEST_NVME_CLI=1 00:07:24.424 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:24.424 SPDK_TEST_NVMF_NICS=e810 00:07:24.424 SPDK_TEST_VFIOUSER=1 00:07:24.424 SPDK_RUN_UBSAN=1 00:07:24.424 NET_TYPE=phy 00:07:24.433 RUN_NIGHTLY=0 00:07:24.437 [Pipeline] readFile 00:07:24.460 [Pipeline] withEnv 00:07:24.462 [Pipeline] { 00:07:24.475 [Pipeline] sh 00:07:24.764 + set -ex 00:07:24.764 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:07:24.764 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:24.764 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:24.764 ++ SPDK_TEST_NVMF=1 00:07:24.764 ++ SPDK_TEST_NVME_CLI=1 00:07:24.764 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:24.764 ++ SPDK_TEST_NVMF_NICS=e810 00:07:24.764 ++ SPDK_TEST_VFIOUSER=1 00:07:24.764 ++ SPDK_RUN_UBSAN=1 00:07:24.764 ++ NET_TYPE=phy 00:07:24.764 ++ RUN_NIGHTLY=0 00:07:24.764 + case $SPDK_TEST_NVMF_NICS in 00:07:24.764 + DRIVERS=ice 00:07:24.764 + [[ tcp == \r\d\m\a ]] 00:07:24.764 + [[ -n ice ]] 00:07:24.764 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:07:24.764 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:07:24.764 
rmmod: ERROR: Module mlx5_ib is not currently loaded 00:07:24.764 rmmod: ERROR: Module irdma is not currently loaded 00:07:24.764 rmmod: ERROR: Module i40iw is not currently loaded 00:07:24.764 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:07:24.764 + true 00:07:24.764 + for D in $DRIVERS 00:07:24.764 + sudo modprobe ice 00:07:24.764 + exit 0 00:07:24.774 [Pipeline] } 00:07:24.789 [Pipeline] // withEnv 00:07:24.795 [Pipeline] } 00:07:24.808 [Pipeline] // stage 00:07:24.817 [Pipeline] catchError 00:07:24.819 [Pipeline] { 00:07:24.833 [Pipeline] timeout 00:07:24.833 Timeout set to expire in 1 hr 0 min 00:07:24.835 [Pipeline] { 00:07:24.848 [Pipeline] stage 00:07:24.850 [Pipeline] { (Tests) 00:07:24.863 [Pipeline] sh 00:07:25.151 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:25.151 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:25.151 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:25.151 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:07:25.151 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:25.151 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:07:25.151 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:07:25.151 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:07:25.151 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:07:25.151 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:07:25.151 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:07:25.151 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:25.151 + source /etc/os-release 00:07:25.151 ++ NAME='Fedora Linux' 00:07:25.151 ++ VERSION='39 (Cloud Edition)' 00:07:25.151 ++ ID=fedora 00:07:25.151 ++ VERSION_ID=39 00:07:25.151 ++ VERSION_CODENAME= 00:07:25.151 ++ PLATFORM_ID=platform:f39 00:07:25.151 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:07:25.151 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:25.151 ++ LOGO=fedora-logo-icon 00:07:25.151 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:07:25.151 ++ HOME_URL=https://fedoraproject.org/ 00:07:25.151 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:07:25.151 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:25.151 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:25.151 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:25.151 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:07:25.151 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:25.151 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:07:25.151 ++ SUPPORT_END=2024-11-12 00:07:25.151 ++ VARIANT='Cloud Edition' 00:07:25.151 ++ VARIANT_ID=cloud 00:07:25.151 + uname -a 00:07:25.151 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:07:25.151 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:27.697 Hugepages 00:07:27.697 node hugesize free / total 00:07:27.697 node0 1048576kB 0 / 0 00:07:27.697 node0 2048kB 0 / 0 00:07:27.697 node1 1048576kB 0 / 0 00:07:27.959 node1 2048kB 0 / 0 00:07:27.959 00:07:27.959 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:27.959 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:07:27.959 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:07:27.959 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:07:27.959 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:07:27.959 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:07:27.959 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:07:27.959 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:07:27.959 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:07:27.959 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:07:27.959 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:07:27.959 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:07:27.959 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:07:27.959 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:07:27.959 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:07:27.959 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:07:27.959 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:07:27.959 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:07:27.959 + rm -f /tmp/spdk-ld-path 00:07:27.959 + source autorun-spdk.conf 00:07:27.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:27.959 ++ SPDK_TEST_NVMF=1 00:07:27.959 ++ SPDK_TEST_NVME_CLI=1 00:07:27.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:27.959 ++ SPDK_TEST_NVMF_NICS=e810 00:07:27.959 ++ SPDK_TEST_VFIOUSER=1 00:07:27.959 ++ SPDK_RUN_UBSAN=1 00:07:27.959 ++ NET_TYPE=phy 00:07:27.959 ++ RUN_NIGHTLY=0 00:07:27.959 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:27.959 + [[ -n '' ]] 00:07:27.959 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:27.960 + for M in /var/spdk/build-*-manifest.txt 00:07:27.960 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:27.960 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:07:27.960 + for M in /var/spdk/build-*-manifest.txt 00:07:27.960 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:27.960 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:07:27.960 + for M in /var/spdk/build-*-manifest.txt 00:07:27.960 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:07:27.960 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:07:27.960 ++ uname 00:07:27.960 + [[ Linux == \L\i\n\u\x ]] 00:07:27.960 + sudo dmesg -T 00:07:28.237 + sudo dmesg --clear 00:07:28.237 + dmesg_pid=2847531 00:07:28.237 + [[ Fedora Linux == FreeBSD ]] 00:07:28.237 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:28.237 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:28.237 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:28.237 + [[ -x /usr/src/fio-static/fio ]] 00:07:28.237 + export FIO_BIN=/usr/src/fio-static/fio 00:07:28.237 + FIO_BIN=/usr/src/fio-static/fio 00:07:28.237 + sudo dmesg -Tw 00:07:28.237 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:28.237 + [[ ! -v VFIO_QEMU_BIN ]] 00:07:28.237 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:28.237 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:28.237 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:28.237 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:28.237 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:28.237 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:28.237 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:28.237 16:31:35 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:07:28.237 16:31:35 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:28.237 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:28.238 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:07:28.238 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:07:28.238 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
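autorun-spdk.conf, written by the Prepare stage and sourced twice above (once under `set -ex`, once by autorun.sh), is plain `KEY=value` shell, which is why a bare `source` is enough to load it. A minimal sketch of that consume pattern (the file location here is illustrative; the job keeps it in its Jenkins workspace, and the existence guard mirrors the `[[ -f ... ]]` check in the log):

```shell
#!/bin/sh
# Recreate a two-key conf in the current directory (illustrative subset
# of the settings shown in the log).
conf=./autorun-spdk.conf
cat > "$conf" <<'EOF'
SPDK_TEST_NVMF=1
SPDK_TEST_NVMF_TRANSPORT=tcp
EOF

# Only source the file if it exists, as the pipeline step does.
[ -f "$conf" ] && . "$conf"
echo "$SPDK_TEST_NVMF_TRANSPORT"
```

Because the values become ordinary environment variables, later steps can branch on them directly, as the `case $SPDK_TEST_NVMF_NICS in` block above does.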
00:07:28.238 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:07:28.238 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:07:28.238 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:07:28.238 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:07:28.238 16:31:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:07:28.238 16:31:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:07:28.238 16:31:35 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:28.238 16:31:35 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:07:28.238 16:31:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.238 16:31:35 -- scripts/common.sh@15 -- $ shopt -s extglob 00:07:28.238 16:31:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:28.238 16:31:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.238 16:31:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.238 16:31:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.238 16:31:35 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.238 16:31:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.238 16:31:35 -- paths/export.sh@5 -- $ export PATH 00:07:28.238 16:31:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.238 16:31:35 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:28.238 16:31:35 -- common/autobuild_common.sh@486 -- $ date +%s 00:07:28.238 16:31:35 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730820695.XXXXXX 00:07:28.238 16:31:35 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730820695.9H5als 00:07:28.238 16:31:35 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:07:28.238 16:31:35 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:07:28.238 16:31:35 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:07:28.238 16:31:35 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:07:28.238 16:31:35 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:07:28.238 16:31:35 -- common/autobuild_common.sh@502 -- $ get_config_params 00:07:28.238 16:31:35 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:07:28.238 16:31:35 -- common/autotest_common.sh@10 -- $ set +x 00:07:28.238 16:31:35 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:07:28.238 16:31:35 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:07:28.238 16:31:35 -- pm/common@17 -- $ local monitor 00:07:28.238 16:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:28.238 16:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:28.238 16:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:28.238 16:31:35 -- pm/common@21 -- $ date +%s 00:07:28.238 16:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:28.238 16:31:35 -- pm/common@25 -- $ sleep 1 00:07:28.238 16:31:35 -- pm/common@21 -- $ date +%s 00:07:28.238 16:31:35 -- pm/common@21 -- $ date +%s 00:07:28.238 16:31:35 -- pm/common@21 -- $ date +%s 00:07:28.238 16:31:35 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730820695 00:07:28.238 16:31:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730820695 00:07:28.238 16:31:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730820695 00:07:28.238 16:31:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730820695 00:07:28.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730820695_collect-cpu-load.pm.log 00:07:28.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730820695_collect-vmstat.pm.log 00:07:28.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730820695_collect-cpu-temp.pm.log 00:07:28.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730820695_collect-bmc-pm.bmc.pm.log 00:07:29.442 16:31:36 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:07:29.442 16:31:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:29.442 16:31:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:29.442 16:31:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:29.442 16:31:36 -- spdk/autobuild.sh@16 -- $ date -u 00:07:29.442 Tue Nov 5 03:31:36 PM UTC 2024 00:07:29.442 16:31:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:07:29.442 v25.01-pre-161-gdbbc706e0 00:07:29.442 16:31:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:29.442 16:31:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:29.442 16:31:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:29.442 16:31:36 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:07:29.442 16:31:36 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:07:29.442 16:31:36 -- common/autotest_common.sh@10 -- $ set +x 00:07:29.442 ************************************ 00:07:29.442 START TEST ubsan 00:07:29.442 ************************************ 00:07:29.442 16:31:36 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:07:29.442 using ubsan 00:07:29.442 00:07:29.443 real 0m0.001s 00:07:29.443 user 0m0.001s 00:07:29.443 sys 0m0.000s 00:07:29.443 16:31:36 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:07:29.443 16:31:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:29.443 ************************************ 00:07:29.443 END TEST ubsan 00:07:29.443 ************************************ 00:07:29.443 16:31:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:29.443 16:31:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:29.443 16:31:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:29.443 16:31:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:29.443 16:31:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:29.443 16:31:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:29.443 16:31:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:07:29.443 16:31:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:29.443 16:31:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:07:29.705 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:29.705 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:29.966 Using 'verbs' RDMA provider 00:07:45.884 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:58.127 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:58.127 Creating mk/config.mk...done. 00:07:58.127 Creating mk/cc.flags.mk...done. 00:07:58.127 Type 'make' to build. 00:07:58.127 16:32:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:07:58.127 16:32:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:07:58.127 16:32:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:07:58.127 16:32:04 -- common/autotest_common.sh@10 -- $ set +x 00:07:58.127 ************************************ 00:07:58.127 START TEST make 00:07:58.127 ************************************ 00:07:58.127 16:32:04 make -- common/autotest_common.sh@1127 -- $ make -j144 00:07:58.127 make[1]: Nothing to be done for 'all'. 
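The `run_test ubsan ...` and `run_test make ...` invocations above produce matching `START TEST` / `END TEST` banners with per-test timing. The following is a hypothetical reimplementation inferred from the output only: the real helper lives in SPDK's common/autotest_common.sh and also records real/user/sys timing, so this sketch shows the banner shape, not the actual source.

```shell
#!/bin/sh
# Sketch of a run_test-style wrapper: banner, run the named command
# with its arguments, banner again, preserve the exit status.
run_test_sketch() {
    name=$1; shift
    echo "START TEST $name"
    "$@"
    rc=$?
    echo "END TEST $name"
    return $rc
}

# Mirrors the invocation seen in the log.
run_test_sketch ubsan echo 'using ubsan'
```

Preserving the wrapped command's exit status is what lets the outer `trap 'timing_finish || exit 1' EXIT` logic fail the stage when a test fails.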
00:07:59.510 The Meson build system 00:07:59.510 Version: 1.5.0 00:07:59.510 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:07:59.510 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:59.510 Build type: native build 00:07:59.510 Project name: libvfio-user 00:07:59.510 Project version: 0.0.1 00:07:59.510 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:59.510 C linker for the host machine: cc ld.bfd 2.40-14 00:07:59.510 Host machine cpu family: x86_64 00:07:59.510 Host machine cpu: x86_64 00:07:59.511 Run-time dependency threads found: YES 00:07:59.511 Library dl found: YES 00:07:59.511 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:59.511 Run-time dependency json-c found: YES 0.17 00:07:59.511 Run-time dependency cmocka found: YES 1.1.7 00:07:59.511 Program pytest-3 found: NO 00:07:59.511 Program flake8 found: NO 00:07:59.511 Program misspell-fixer found: NO 00:07:59.511 Program restructuredtext-lint found: NO 00:07:59.511 Program valgrind found: YES (/usr/bin/valgrind) 00:07:59.511 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:59.511 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:59.511 Compiler for C supports arguments -Wwrite-strings: YES 00:07:59.511 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:07:59.511 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:07:59.511 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:07:59.511 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:07:59.511 Build targets in project: 8 00:07:59.511 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:07:59.511 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:07:59.511 00:07:59.511 libvfio-user 0.0.1 00:07:59.511 00:07:59.511 User defined options 00:07:59.511 buildtype : debug 00:07:59.511 default_library: shared 00:07:59.511 libdir : /usr/local/lib 00:07:59.511 00:07:59.511 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:59.771 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:59.771 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:07:59.771 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:07:59.771 [3/37] Compiling C object samples/null.p/null.c.o 00:07:59.771 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:07:59.771 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:07:59.771 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:07:59.771 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:07:59.771 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:07:59.771 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:07:59.771 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:07:59.771 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:07:59.771 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:07:59.771 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:07:59.771 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:07:59.771 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:07:59.771 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:07:59.771 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:07:59.771 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_pipe.c.o 00:07:59.771 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:07:59.771 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:07:59.771 [21/37] Compiling C object samples/server.p/server.c.o 00:07:59.771 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:07:59.771 [23/37] Compiling C object samples/client.p/client.c.o 00:07:59.771 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:07:59.771 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:07:59.771 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:07:59.771 [27/37] Linking target samples/client 00:08:00.032 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:08:00.032 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:08:00.032 [30/37] Linking target test/unit_tests 00:08:00.032 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:08:00.032 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:08:00.291 [33/37] Linking target samples/server 00:08:00.291 [34/37] Linking target samples/null 00:08:00.292 [35/37] Linking target samples/lspci 00:08:00.292 [36/37] Linking target samples/gpio-pci-idio-16 00:08:00.292 [37/37] Linking target samples/shadow_ioeventfd_server 00:08:00.292 INFO: autodetecting backend as ninja 00:08:00.292 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:08:00.292 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:08:00.552 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:08:00.552 ninja: no work to do. 
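The `DESTDIR=... meson install` invocation above stages libvfio-user under the build tree instead of the configured `/usr/local/lib`. The convention is simple path concatenation: the installer writes to `DESTDIR` + configured libdir. A tiny sketch of that staging arithmetic (paths are illustrative, and `:` stands in for the actual install of the shared object):

```shell
#!/bin/sh
# Stage an "install" under ./stage rather than the real /usr/local/lib,
# the way DESTDIR-aware installers (meson install, make install) do.
DESTDIR=./stage
libdir=/usr/local/lib

mkdir -p "${DESTDIR}${libdir}"
: > "${DESTDIR}${libdir}/libvfio-user.so.0.0.1"   # placeholder for the built library
ls "${DESTDIR}${libdir}"
```

Staging like this keeps the build self-contained: SPDK links against the copy under `build/libvfio-user` without touching the host's library directories.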
00:08:07.142 The Meson build system 00:08:07.142 Version: 1.5.0 00:08:07.142 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:08:07.142 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:08:07.142 Build type: native build 00:08:07.142 Program cat found: YES (/usr/bin/cat) 00:08:07.142 Project name: DPDK 00:08:07.142 Project version: 24.03.0 00:08:07.142 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:08:07.142 C linker for the host machine: cc ld.bfd 2.40-14 00:08:07.142 Host machine cpu family: x86_64 00:08:07.142 Host machine cpu: x86_64 00:08:07.142 Message: ## Building in Developer Mode ## 00:08:07.142 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:07.142 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:08:07.142 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:08:07.142 Program python3 found: YES (/usr/bin/python3) 00:08:07.142 Program cat found: YES (/usr/bin/cat) 00:08:07.142 Compiler for C supports arguments -march=native: YES 00:08:07.142 Checking for size of "void *" : 8 00:08:07.142 Checking for size of "void *" : 8 (cached) 00:08:07.142 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:08:07.142 Library m found: YES 00:08:07.142 Library numa found: YES 00:08:07.142 Has header "numaif.h" : YES 00:08:07.142 Library fdt found: NO 00:08:07.142 Library execinfo found: NO 00:08:07.142 Has header "execinfo.h" : YES 00:08:07.142 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:08:07.142 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:07.142 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:07.142 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:07.142 Run-time dependency openssl found: YES 3.1.1 00:08:07.142 Run-time 
dependency libpcap found: YES 1.10.4 00:08:07.142 Has header "pcap.h" with dependency libpcap: YES 00:08:07.142 Compiler for C supports arguments -Wcast-qual: YES 00:08:07.142 Compiler for C supports arguments -Wdeprecated: YES 00:08:07.142 Compiler for C supports arguments -Wformat: YES 00:08:07.142 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:07.142 Compiler for C supports arguments -Wformat-security: NO 00:08:07.142 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:07.142 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:07.142 Compiler for C supports arguments -Wnested-externs: YES 00:08:07.142 Compiler for C supports arguments -Wold-style-definition: YES 00:08:07.142 Compiler for C supports arguments -Wpointer-arith: YES 00:08:07.142 Compiler for C supports arguments -Wsign-compare: YES 00:08:07.142 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:07.142 Compiler for C supports arguments -Wundef: YES 00:08:07.142 Compiler for C supports arguments -Wwrite-strings: YES 00:08:07.142 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:07.142 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:07.142 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:07.142 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:07.142 Program objdump found: YES (/usr/bin/objdump) 00:08:07.142 Compiler for C supports arguments -mavx512f: YES 00:08:07.142 Checking if "AVX512 checking" compiles: YES 00:08:07.142 Fetching value of define "__SSE4_2__" : 1 00:08:07.142 Fetching value of define "__AES__" : 1 00:08:07.142 Fetching value of define "__AVX__" : 1 00:08:07.142 Fetching value of define "__AVX2__" : 1 00:08:07.142 Fetching value of define "__AVX512BW__" : 1 00:08:07.142 Fetching value of define "__AVX512CD__" : 1 00:08:07.142 Fetching value of define "__AVX512DQ__" : 1 00:08:07.142 Fetching value of define "__AVX512F__" : 1 
00:08:07.142 Fetching value of define "__AVX512VL__" : 1 00:08:07.142 Fetching value of define "__PCLMUL__" : 1 00:08:07.142 Fetching value of define "__RDRND__" : 1 00:08:07.142 Fetching value of define "__RDSEED__" : 1 00:08:07.142 Fetching value of define "__VPCLMULQDQ__" : 1 00:08:07.142 Fetching value of define "__znver1__" : (undefined) 00:08:07.142 Fetching value of define "__znver2__" : (undefined) 00:08:07.142 Fetching value of define "__znver3__" : (undefined) 00:08:07.142 Fetching value of define "__znver4__" : (undefined) 00:08:07.142 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:07.142 Message: lib/log: Defining dependency "log" 00:08:07.142 Message: lib/kvargs: Defining dependency "kvargs" 00:08:07.142 Message: lib/telemetry: Defining dependency "telemetry" 00:08:07.142 Checking for function "getentropy" : NO 00:08:07.142 Message: lib/eal: Defining dependency "eal" 00:08:07.142 Message: lib/ring: Defining dependency "ring" 00:08:07.142 Message: lib/rcu: Defining dependency "rcu" 00:08:07.142 Message: lib/mempool: Defining dependency "mempool" 00:08:07.142 Message: lib/mbuf: Defining dependency "mbuf" 00:08:07.142 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:07.142 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:07.142 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:07.142 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:07.142 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:07.142 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:08:07.142 Compiler for C supports arguments -mpclmul: YES 00:08:07.142 Compiler for C supports arguments -maes: YES 00:08:07.142 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:07.142 Compiler for C supports arguments -mavx512bw: YES 00:08:07.142 Compiler for C supports arguments -mavx512dq: YES 00:08:07.142 Compiler for C supports arguments -mavx512vl: YES 00:08:07.142 Compiler for C supports arguments -mvpclmulqdq: YES 
00:08:07.142 Compiler for C supports arguments -mavx2: YES 00:08:07.142 Compiler for C supports arguments -mavx: YES 00:08:07.142 Message: lib/net: Defining dependency "net" 00:08:07.142 Message: lib/meter: Defining dependency "meter" 00:08:07.142 Message: lib/ethdev: Defining dependency "ethdev" 00:08:07.142 Message: lib/pci: Defining dependency "pci" 00:08:07.142 Message: lib/cmdline: Defining dependency "cmdline" 00:08:07.142 Message: lib/hash: Defining dependency "hash" 00:08:07.142 Message: lib/timer: Defining dependency "timer" 00:08:07.142 Message: lib/compressdev: Defining dependency "compressdev" 00:08:07.142 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:07.142 Message: lib/dmadev: Defining dependency "dmadev" 00:08:07.142 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:07.142 Message: lib/power: Defining dependency "power" 00:08:07.142 Message: lib/reorder: Defining dependency "reorder" 00:08:07.142 Message: lib/security: Defining dependency "security" 00:08:07.142 Has header "linux/userfaultfd.h" : YES 00:08:07.142 Has header "linux/vduse.h" : YES 00:08:07.142 Message: lib/vhost: Defining dependency "vhost" 00:08:07.142 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:07.142 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:07.142 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:07.142 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:07.142 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:08:07.142 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:08:07.142 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:08:07.142 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:08:07.142 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:08:07.142 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:08:07.142 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:08:07.142 Configuring doxy-api-html.conf using configuration 00:08:07.142 Configuring doxy-api-man.conf using configuration 00:08:07.142 Program mandb found: YES (/usr/bin/mandb) 00:08:07.142 Program sphinx-build found: NO 00:08:07.142 Configuring rte_build_config.h using configuration 00:08:07.142 Message: 00:08:07.142 ================= 00:08:07.142 Applications Enabled 00:08:07.142 ================= 00:08:07.142 00:08:07.143 apps: 00:08:07.143 00:08:07.143 00:08:07.143 Message: 00:08:07.143 ================= 00:08:07.143 Libraries Enabled 00:08:07.143 ================= 00:08:07.143 00:08:07.143 libs: 00:08:07.143 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:07.143 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:08:07.143 cryptodev, dmadev, power, reorder, security, vhost, 00:08:07.143 00:08:07.143 Message: 00:08:07.143 =============== 00:08:07.143 Drivers Enabled 00:08:07.143 =============== 00:08:07.143 00:08:07.143 common: 00:08:07.143 00:08:07.143 bus: 00:08:07.143 pci, vdev, 00:08:07.143 mempool: 00:08:07.143 ring, 00:08:07.143 dma: 00:08:07.143 00:08:07.143 net: 00:08:07.143 00:08:07.143 crypto: 00:08:07.143 00:08:07.143 compress: 00:08:07.143 00:08:07.143 vdpa: 00:08:07.143 00:08:07.143 00:08:07.143 Message: 00:08:07.143 ================= 00:08:07.143 Content Skipped 00:08:07.143 ================= 00:08:07.143 00:08:07.143 apps: 00:08:07.143 dumpcap: explicitly disabled via build config 00:08:07.143 graph: explicitly disabled via build config 00:08:07.143 pdump: explicitly disabled via build config 00:08:07.143 proc-info: explicitly disabled via build config 00:08:07.143 test-acl: explicitly disabled via build config 00:08:07.143 test-bbdev: explicitly disabled via build config 00:08:07.143 test-cmdline: explicitly disabled via build config 00:08:07.143 test-compress-perf: explicitly disabled via build config 00:08:07.143 test-crypto-perf: explicitly disabled via build 
config 00:08:07.143 test-dma-perf: explicitly disabled via build config 00:08:07.143 test-eventdev: explicitly disabled via build config 00:08:07.143 test-fib: explicitly disabled via build config 00:08:07.143 test-flow-perf: explicitly disabled via build config 00:08:07.143 test-gpudev: explicitly disabled via build config 00:08:07.143 test-mldev: explicitly disabled via build config 00:08:07.143 test-pipeline: explicitly disabled via build config 00:08:07.143 test-pmd: explicitly disabled via build config 00:08:07.143 test-regex: explicitly disabled via build config 00:08:07.143 test-sad: explicitly disabled via build config 00:08:07.143 test-security-perf: explicitly disabled via build config 00:08:07.143 00:08:07.143 libs: 00:08:07.143 argparse: explicitly disabled via build config 00:08:07.143 metrics: explicitly disabled via build config 00:08:07.143 acl: explicitly disabled via build config 00:08:07.143 bbdev: explicitly disabled via build config 00:08:07.143 bitratestats: explicitly disabled via build config 00:08:07.143 bpf: explicitly disabled via build config 00:08:07.143 cfgfile: explicitly disabled via build config 00:08:07.143 distributor: explicitly disabled via build config 00:08:07.143 efd: explicitly disabled via build config 00:08:07.143 eventdev: explicitly disabled via build config 00:08:07.143 dispatcher: explicitly disabled via build config 00:08:07.143 gpudev: explicitly disabled via build config 00:08:07.143 gro: explicitly disabled via build config 00:08:07.143 gso: explicitly disabled via build config 00:08:07.143 ip_frag: explicitly disabled via build config 00:08:07.143 jobstats: explicitly disabled via build config 00:08:07.143 latencystats: explicitly disabled via build config 00:08:07.143 lpm: explicitly disabled via build config 00:08:07.143 member: explicitly disabled via build config 00:08:07.143 pcapng: explicitly disabled via build config 00:08:07.143 rawdev: explicitly disabled via build config 00:08:07.143 regexdev: explicitly 
disabled via build config 00:08:07.143 mldev: explicitly disabled via build config 00:08:07.143 rib: explicitly disabled via build config 00:08:07.143 sched: explicitly disabled via build config 00:08:07.143 stack: explicitly disabled via build config 00:08:07.143 ipsec: explicitly disabled via build config 00:08:07.143 pdcp: explicitly disabled via build config 00:08:07.143 fib: explicitly disabled via build config 00:08:07.143 port: explicitly disabled via build config 00:08:07.143 pdump: explicitly disabled via build config 00:08:07.143 table: explicitly disabled via build config 00:08:07.143 pipeline: explicitly disabled via build config 00:08:07.143 graph: explicitly disabled via build config 00:08:07.143 node: explicitly disabled via build config 00:08:07.143 00:08:07.143 drivers: 00:08:07.143 common/cpt: not in enabled drivers build config 00:08:07.143 common/dpaax: not in enabled drivers build config 00:08:07.143 common/iavf: not in enabled drivers build config 00:08:07.143 common/idpf: not in enabled drivers build config 00:08:07.143 common/ionic: not in enabled drivers build config 00:08:07.143 common/mvep: not in enabled drivers build config 00:08:07.143 common/octeontx: not in enabled drivers build config 00:08:07.143 bus/auxiliary: not in enabled drivers build config 00:08:07.143 bus/cdx: not in enabled drivers build config 00:08:07.143 bus/dpaa: not in enabled drivers build config 00:08:07.143 bus/fslmc: not in enabled drivers build config 00:08:07.143 bus/ifpga: not in enabled drivers build config 00:08:07.143 bus/platform: not in enabled drivers build config 00:08:07.143 bus/uacce: not in enabled drivers build config 00:08:07.143 bus/vmbus: not in enabled drivers build config 00:08:07.143 common/cnxk: not in enabled drivers build config 00:08:07.143 common/mlx5: not in enabled drivers build config 00:08:07.143 common/nfp: not in enabled drivers build config 00:08:07.143 common/nitrox: not in enabled drivers build config 00:08:07.143 common/qat: not 
in enabled drivers build config 00:08:07.143 common/sfc_efx: not in enabled drivers build config 00:08:07.143 mempool/bucket: not in enabled drivers build config 00:08:07.143 mempool/cnxk: not in enabled drivers build config 00:08:07.143 mempool/dpaa: not in enabled drivers build config 00:08:07.143 mempool/dpaa2: not in enabled drivers build config 00:08:07.143 mempool/octeontx: not in enabled drivers build config 00:08:07.143 mempool/stack: not in enabled drivers build config 00:08:07.143 dma/cnxk: not in enabled drivers build config 00:08:07.143 dma/dpaa: not in enabled drivers build config 00:08:07.143 dma/dpaa2: not in enabled drivers build config 00:08:07.143 dma/hisilicon: not in enabled drivers build config 00:08:07.143 dma/idxd: not in enabled drivers build config 00:08:07.143 dma/ioat: not in enabled drivers build config 00:08:07.143 dma/skeleton: not in enabled drivers build config 00:08:07.143 net/af_packet: not in enabled drivers build config 00:08:07.143 net/af_xdp: not in enabled drivers build config 00:08:07.143 net/ark: not in enabled drivers build config 00:08:07.143 net/atlantic: not in enabled drivers build config 00:08:07.143 net/avp: not in enabled drivers build config 00:08:07.143 net/axgbe: not in enabled drivers build config 00:08:07.143 net/bnx2x: not in enabled drivers build config 00:08:07.143 net/bnxt: not in enabled drivers build config 00:08:07.143 net/bonding: not in enabled drivers build config 00:08:07.143 net/cnxk: not in enabled drivers build config 00:08:07.143 net/cpfl: not in enabled drivers build config 00:08:07.143 net/cxgbe: not in enabled drivers build config 00:08:07.143 net/dpaa: not in enabled drivers build config 00:08:07.143 net/dpaa2: not in enabled drivers build config 00:08:07.143 net/e1000: not in enabled drivers build config 00:08:07.143 net/ena: not in enabled drivers build config 00:08:07.143 net/enetc: not in enabled drivers build config 00:08:07.143 net/enetfec: not in enabled drivers build config 
00:08:07.143 net/enic: not in enabled drivers build config 00:08:07.143 net/failsafe: not in enabled drivers build config 00:08:07.143 net/fm10k: not in enabled drivers build config 00:08:07.143 net/gve: not in enabled drivers build config 00:08:07.143 net/hinic: not in enabled drivers build config 00:08:07.143 net/hns3: not in enabled drivers build config 00:08:07.143 net/i40e: not in enabled drivers build config 00:08:07.143 net/iavf: not in enabled drivers build config 00:08:07.143 net/ice: not in enabled drivers build config 00:08:07.143 net/idpf: not in enabled drivers build config 00:08:07.143 net/igc: not in enabled drivers build config 00:08:07.143 net/ionic: not in enabled drivers build config 00:08:07.143 net/ipn3ke: not in enabled drivers build config 00:08:07.143 net/ixgbe: not in enabled drivers build config 00:08:07.143 net/mana: not in enabled drivers build config 00:08:07.143 net/memif: not in enabled drivers build config 00:08:07.143 net/mlx4: not in enabled drivers build config 00:08:07.143 net/mlx5: not in enabled drivers build config 00:08:07.143 net/mvneta: not in enabled drivers build config 00:08:07.143 net/mvpp2: not in enabled drivers build config 00:08:07.143 net/netvsc: not in enabled drivers build config 00:08:07.143 net/nfb: not in enabled drivers build config 00:08:07.143 net/nfp: not in enabled drivers build config 00:08:07.143 net/ngbe: not in enabled drivers build config 00:08:07.143 net/null: not in enabled drivers build config 00:08:07.143 net/octeontx: not in enabled drivers build config 00:08:07.143 net/octeon_ep: not in enabled drivers build config 00:08:07.143 net/pcap: not in enabled drivers build config 00:08:07.143 net/pfe: not in enabled drivers build config 00:08:07.143 net/qede: not in enabled drivers build config 00:08:07.143 net/ring: not in enabled drivers build config 00:08:07.143 net/sfc: not in enabled drivers build config 00:08:07.143 net/softnic: not in enabled drivers build config 00:08:07.143 net/tap: not in 
enabled drivers build config 00:08:07.143 net/thunderx: not in enabled drivers build config 00:08:07.143 net/txgbe: not in enabled drivers build config 00:08:07.143 net/vdev_netvsc: not in enabled drivers build config 00:08:07.143 net/vhost: not in enabled drivers build config 00:08:07.144 net/virtio: not in enabled drivers build config 00:08:07.144 net/vmxnet3: not in enabled drivers build config 00:08:07.144 raw/*: missing internal dependency, "rawdev" 00:08:07.144 crypto/armv8: not in enabled drivers build config 00:08:07.144 crypto/bcmfs: not in enabled drivers build config 00:08:07.144 crypto/caam_jr: not in enabled drivers build config 00:08:07.144 crypto/ccp: not in enabled drivers build config 00:08:07.144 crypto/cnxk: not in enabled drivers build config 00:08:07.144 crypto/dpaa_sec: not in enabled drivers build config 00:08:07.144 crypto/dpaa2_sec: not in enabled drivers build config 00:08:07.144 crypto/ipsec_mb: not in enabled drivers build config 00:08:07.144 crypto/mlx5: not in enabled drivers build config 00:08:07.144 crypto/mvsam: not in enabled drivers build config 00:08:07.144 crypto/nitrox: not in enabled drivers build config 00:08:07.144 crypto/null: not in enabled drivers build config 00:08:07.144 crypto/octeontx: not in enabled drivers build config 00:08:07.144 crypto/openssl: not in enabled drivers build config 00:08:07.144 crypto/scheduler: not in enabled drivers build config 00:08:07.144 crypto/uadk: not in enabled drivers build config 00:08:07.144 crypto/virtio: not in enabled drivers build config 00:08:07.144 compress/isal: not in enabled drivers build config 00:08:07.144 compress/mlx5: not in enabled drivers build config 00:08:07.144 compress/nitrox: not in enabled drivers build config 00:08:07.144 compress/octeontx: not in enabled drivers build config 00:08:07.144 compress/zlib: not in enabled drivers build config 00:08:07.144 regex/*: missing internal dependency, "regexdev" 00:08:07.144 ml/*: missing internal dependency, "mldev" 
00:08:07.144 vdpa/ifc: not in enabled drivers build config 00:08:07.144 vdpa/mlx5: not in enabled drivers build config 00:08:07.144 vdpa/nfp: not in enabled drivers build config 00:08:07.144 vdpa/sfc: not in enabled drivers build config 00:08:07.144 event/*: missing internal dependency, "eventdev" 00:08:07.144 baseband/*: missing internal dependency, "bbdev" 00:08:07.144 gpu/*: missing internal dependency, "gpudev" 00:08:07.144 00:08:07.144 00:08:07.144 Build targets in project: 84 00:08:07.144 00:08:07.144 DPDK 24.03.0 00:08:07.144 00:08:07.144 User defined options 00:08:07.144 buildtype : debug 00:08:07.144 default_library : shared 00:08:07.144 libdir : lib 00:08:07.144 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:07.144 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:08:07.144 c_link_args : 00:08:07.144 cpu_instruction_set: native 00:08:07.144 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:08:07.144 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:08:07.144 enable_docs : false 00:08:07.144 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:08:07.144 enable_kmods : false 00:08:07.144 max_lcores : 128 00:08:07.144 tests : false 00:08:07.144 00:08:07.144 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:07.144 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:08:07.144 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:07.144 [2/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:07.144 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:07.144 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:07.144 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:07.144 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:07.144 [7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:07.144 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:07.405 [9/267] Linking static target lib/librte_kvargs.a 00:08:07.405 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:07.405 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:07.405 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:07.405 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:07.405 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:07.405 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:07.405 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:07.405 [17/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:07.405 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:07.405 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:07.405 [20/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:07.405 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:07.405 [22/267] Linking static target lib/librte_log.a 00:08:07.405 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:07.405 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:07.405 [25/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:07.405 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:07.405 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:07.405 [28/267] Linking static target lib/librte_pci.a 00:08:07.405 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:07.405 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:07.405 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:07.405 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:07.405 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:07.405 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:07.664 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:07.664 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:07.664 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:07.664 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:07.664 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:07.664 [40/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:07.664 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:07.664 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:07.664 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:07.664 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:07.664 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:07.664 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:07.664 [47/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:07.664 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:07.664 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:07.664 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:07.664 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:07.664 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:07.664 [53/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:07.664 [54/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:07.664 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:07.664 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:07.664 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:07.664 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:07.664 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:07.664 [60/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:07.665 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:07.665 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:07.665 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:07.665 [64/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:07.665 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:07.950 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:07.950 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:07.950 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:07.950 [69/267] Compiling 
C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:07.950 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:07.950 [71/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:07.950 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:07.950 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:07.950 [74/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:07.950 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:07.950 [76/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:07.950 [77/267] Linking static target lib/librte_meter.a 00:08:07.950 [78/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:07.950 [79/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:07.950 [80/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:07.950 [81/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:07.950 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:07.950 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:07.950 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:07.950 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:07.950 [86/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:07.950 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:07.950 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:07.950 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:07.950 [90/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:07.950 [91/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:07.950 [92/267] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:07.950 [93/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:07.950 [94/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:07.950 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:07.950 [96/267] Linking static target lib/librte_telemetry.a 00:08:07.950 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:07.950 [98/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:07.950 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:07.950 [100/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:07.950 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:07.950 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:07.950 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:08:07.950 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:07.950 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:07.950 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:07.950 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:07.950 [108/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:07.950 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:07.950 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:07.950 [111/267] Linking static target lib/librte_ring.a 00:08:07.950 [112/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:07.950 [113/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:07.950 [114/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:07.950 
[115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:07.950 [116/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:07.950 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:07.950 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:07.950 [119/267] Linking static target lib/librte_rcu.a 00:08:07.950 [120/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:07.950 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:07.950 [122/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:07.950 [123/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:07.950 [124/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:07.950 [125/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:07.950 [126/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:07.950 [127/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:07.950 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:07.950 [129/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:07.950 [130/267] Linking static target lib/librte_cmdline.a 00:08:07.950 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:07.950 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:07.950 [133/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:07.950 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:07.950 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:07.950 [136/267] Linking static target lib/librte_timer.a 00:08:07.950 [137/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:07.950 [138/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:07.950 [139/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:07.950 [140/267] Linking static target lib/librte_net.a 00:08:07.950 [141/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:07.950 [142/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:07.950 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:07.950 [144/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:07.950 [145/267] Linking static target lib/librte_dmadev.a 00:08:07.950 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:07.950 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:07.950 [148/267] Linking static target lib/librte_power.a 00:08:07.950 [149/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:07.950 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:07.950 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:07.950 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:07.950 [153/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:07.950 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:07.950 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:07.950 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:07.950 [157/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:07.950 [158/267] Linking static target lib/librte_eal.a 00:08:07.950 [159/267] Linking static target lib/librte_mempool.a 00:08:07.950 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:07.950 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:07.950 [162/267] 
Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:07.950 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:07.950 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:07.950 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:07.950 [166/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:07.950 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:07.950 [168/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:07.950 [169/267] Linking static target lib/librte_compressdev.a 00:08:07.950 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:07.950 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:07.950 [172/267] Linking target lib/librte_log.so.24.1 00:08:07.950 [173/267] Linking static target lib/librte_reorder.a 00:08:07.950 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:08.211 [175/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:08.211 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:08.211 [177/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.211 [178/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:08.211 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:08.211 [180/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:08.211 [181/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:08.211 [182/267] Linking static target lib/librte_security.a 00:08:08.211 [183/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:08.211 [184/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:08:08.211 [185/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:08.211 [186/267] Linking static target lib/librte_mbuf.a 00:08:08.211 [187/267] Linking static target drivers/librte_bus_vdev.a 00:08:08.211 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:08.211 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:08.211 [190/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:08.211 [191/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:08:08.211 [192/267] Linking static target lib/librte_hash.a 00:08:08.211 [193/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:08.211 [194/267] Linking target lib/librte_kvargs.so.24.1 00:08:08.211 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:08.211 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:08.211 [197/267] Linking static target drivers/librte_bus_pci.a 00:08:08.211 [198/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.211 [199/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.211 [200/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.211 [201/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:08.473 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:08.473 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:08:08.473 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:08.473 [205/267] Linking static target drivers/librte_mempool_ring.a 00:08:08.473 [206/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:08.473 [207/267] Linking static target lib/librte_cryptodev.a 00:08:08.473 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:08.473 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.473 [210/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.473 [211/267] Linking target lib/librte_telemetry.so.24.1 00:08:08.473 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.473 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.733 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:08:08.733 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.733 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.733 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:08.733 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.733 [219/267] Linking static target lib/librte_ethdev.a 00:08:08.733 [220/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:08.994 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.994 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.994 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.994 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.256 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 
00:08:09.256 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.827 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:10.088 [228/267] Linking static target lib/librte_vhost.a 00:08:10.660 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:12.047 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:18.633 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.573 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.573 [233/267] Linking target lib/librte_eal.so.24.1 00:08:19.573 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:19.573 [235/267] Linking target lib/librte_dmadev.so.24.1 00:08:19.573 [236/267] Linking target lib/librte_ring.so.24.1 00:08:19.573 [237/267] Linking target lib/librte_meter.so.24.1 00:08:19.573 [238/267] Linking target lib/librte_pci.so.24.1 00:08:19.573 [239/267] Linking target lib/librte_timer.so.24.1 00:08:19.573 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:08:19.834 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:19.834 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:19.834 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:19.834 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:19.834 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:19.834 [246/267] Linking target lib/librte_rcu.so.24.1 00:08:19.834 [247/267] Linking target lib/librte_mempool.so.24.1 00:08:19.834 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:08:19.834 [249/267] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:19.834 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:20.095 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:08:20.095 [252/267] Linking target lib/librte_mbuf.so.24.1 00:08:20.095 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:20.095 [254/267] Linking target lib/librte_net.so.24.1 00:08:20.095 [255/267] Linking target lib/librte_compressdev.so.24.1 00:08:20.095 [256/267] Linking target lib/librte_reorder.so.24.1 00:08:20.095 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:08:20.356 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:20.356 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:20.356 [260/267] Linking target lib/librte_hash.so.24.1 00:08:20.356 [261/267] Linking target lib/librte_cmdline.so.24.1 00:08:20.356 [262/267] Linking target lib/librte_security.so.24.1 00:08:20.356 [263/267] Linking target lib/librte_ethdev.so.24.1 00:08:20.616 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:20.616 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:20.616 [266/267] Linking target lib/librte_power.so.24.1 00:08:20.616 [267/267] Linking target lib/librte_vhost.so.24.1 00:08:20.616 INFO: autodetecting backend as ninja 00:08:20.617 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:08:25.906 CC lib/ut/ut.o 00:08:25.906 CC lib/ut_mock/mock.o 00:08:25.906 CC lib/log/log.o 00:08:25.906 CC lib/log/log_flags.o 00:08:25.906 CC lib/log/log_deprecated.o 00:08:25.906 LIB libspdk_ut.a 00:08:25.906 LIB libspdk_ut_mock.a 00:08:25.906 LIB libspdk_log.a 00:08:25.906 SO libspdk_ut.so.2.0 00:08:25.906 SO libspdk_ut_mock.so.6.0 
00:08:25.906 SO libspdk_log.so.7.1 00:08:25.906 SYMLINK libspdk_ut.so 00:08:25.906 SYMLINK libspdk_ut_mock.so 00:08:25.906 SYMLINK libspdk_log.so 00:08:25.906 CC lib/dma/dma.o 00:08:25.906 CC lib/util/base64.o 00:08:25.906 CC lib/util/bit_array.o 00:08:25.906 CC lib/util/cpuset.o 00:08:25.906 CC lib/util/crc16.o 00:08:25.906 CC lib/util/crc32.o 00:08:25.906 CC lib/util/crc32c.o 00:08:25.906 CC lib/util/crc32_ieee.o 00:08:25.906 CC lib/util/crc64.o 00:08:25.906 CC lib/ioat/ioat.o 00:08:25.906 CXX lib/trace_parser/trace.o 00:08:25.906 CC lib/util/dif.o 00:08:25.906 CC lib/util/fd.o 00:08:25.906 CC lib/util/fd_group.o 00:08:25.906 CC lib/util/file.o 00:08:25.906 CC lib/util/hexlify.o 00:08:25.906 CC lib/util/iov.o 00:08:25.906 CC lib/util/math.o 00:08:25.906 CC lib/util/net.o 00:08:25.906 CC lib/util/pipe.o 00:08:25.906 CC lib/util/strerror_tls.o 00:08:25.906 CC lib/util/string.o 00:08:25.906 CC lib/util/uuid.o 00:08:25.906 CC lib/util/xor.o 00:08:25.906 CC lib/util/zipf.o 00:08:25.906 CC lib/util/md5.o 00:08:25.906 CC lib/vfio_user/host/vfio_user_pci.o 00:08:25.906 CC lib/vfio_user/host/vfio_user.o 00:08:25.906 LIB libspdk_dma.a 00:08:26.168 SO libspdk_dma.so.5.0 00:08:26.168 LIB libspdk_ioat.a 00:08:26.168 SYMLINK libspdk_dma.so 00:08:26.168 SO libspdk_ioat.so.7.0 00:08:26.168 SYMLINK libspdk_ioat.so 00:08:26.168 LIB libspdk_vfio_user.a 00:08:26.168 SO libspdk_vfio_user.so.5.0 00:08:26.429 LIB libspdk_util.a 00:08:26.429 SYMLINK libspdk_vfio_user.so 00:08:26.429 SO libspdk_util.so.10.1 00:08:26.429 SYMLINK libspdk_util.so 00:08:26.690 LIB libspdk_trace_parser.a 00:08:26.690 SO libspdk_trace_parser.so.6.0 00:08:26.690 SYMLINK libspdk_trace_parser.so 00:08:26.948 CC lib/rdma_provider/common.o 00:08:26.948 CC lib/json/json_parse.o 00:08:26.948 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:26.948 CC lib/json/json_util.o 00:08:26.948 CC lib/json/json_write.o 00:08:26.948 CC lib/conf/conf.o 00:08:26.948 CC lib/vmd/vmd.o 00:08:26.948 CC lib/rdma_utils/rdma_utils.o 
00:08:26.948 CC lib/vmd/led.o 00:08:26.948 CC lib/env_dpdk/env.o 00:08:26.948 CC lib/env_dpdk/pci.o 00:08:26.948 CC lib/env_dpdk/memory.o 00:08:26.948 CC lib/idxd/idxd.o 00:08:26.948 CC lib/idxd/idxd_user.o 00:08:26.948 CC lib/env_dpdk/init.o 00:08:26.948 CC lib/env_dpdk/threads.o 00:08:26.948 CC lib/idxd/idxd_kernel.o 00:08:26.948 CC lib/env_dpdk/pci_ioat.o 00:08:26.949 CC lib/env_dpdk/pci_virtio.o 00:08:26.949 CC lib/env_dpdk/pci_vmd.o 00:08:26.949 CC lib/env_dpdk/pci_idxd.o 00:08:26.949 CC lib/env_dpdk/pci_event.o 00:08:26.949 CC lib/env_dpdk/sigbus_handler.o 00:08:26.949 CC lib/env_dpdk/pci_dpdk.o 00:08:26.949 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:26.949 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:27.208 LIB libspdk_rdma_provider.a 00:08:27.208 LIB libspdk_conf.a 00:08:27.208 SO libspdk_rdma_provider.so.6.0 00:08:27.208 SO libspdk_conf.so.6.0 00:08:27.208 LIB libspdk_rdma_utils.a 00:08:27.208 LIB libspdk_json.a 00:08:27.208 SYMLINK libspdk_conf.so 00:08:27.208 SO libspdk_rdma_utils.so.1.0 00:08:27.208 SYMLINK libspdk_rdma_provider.so 00:08:27.208 SO libspdk_json.so.6.0 00:08:27.208 SYMLINK libspdk_rdma_utils.so 00:08:27.208 SYMLINK libspdk_json.so 00:08:27.469 LIB libspdk_idxd.a 00:08:27.469 LIB libspdk_vmd.a 00:08:27.469 SO libspdk_idxd.so.12.1 00:08:27.469 SO libspdk_vmd.so.6.0 00:08:27.469 SYMLINK libspdk_idxd.so 00:08:27.469 SYMLINK libspdk_vmd.so 00:08:27.731 CC lib/jsonrpc/jsonrpc_server.o 00:08:27.731 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:27.731 CC lib/jsonrpc/jsonrpc_client.o 00:08:27.731 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:27.991 LIB libspdk_jsonrpc.a 00:08:27.991 SO libspdk_jsonrpc.so.6.0 00:08:27.991 SYMLINK libspdk_jsonrpc.so 00:08:28.253 LIB libspdk_env_dpdk.a 00:08:28.253 SO libspdk_env_dpdk.so.15.1 00:08:28.253 SYMLINK libspdk_env_dpdk.so 00:08:28.514 CC lib/rpc/rpc.o 00:08:28.514 LIB libspdk_rpc.a 00:08:28.775 SO libspdk_rpc.so.6.0 00:08:28.775 SYMLINK libspdk_rpc.so 00:08:29.037 CC lib/keyring/keyring.o 00:08:29.037 CC 
lib/keyring/keyring_rpc.o 00:08:29.037 CC lib/trace/trace.o 00:08:29.037 CC lib/trace/trace_flags.o 00:08:29.037 CC lib/trace/trace_rpc.o 00:08:29.037 CC lib/notify/notify.o 00:08:29.037 CC lib/notify/notify_rpc.o 00:08:29.300 LIB libspdk_notify.a 00:08:29.300 SO libspdk_notify.so.6.0 00:08:29.300 LIB libspdk_keyring.a 00:08:29.300 LIB libspdk_trace.a 00:08:29.300 SO libspdk_keyring.so.2.0 00:08:29.300 SO libspdk_trace.so.11.0 00:08:29.300 SYMLINK libspdk_notify.so 00:08:29.300 SYMLINK libspdk_keyring.so 00:08:29.562 SYMLINK libspdk_trace.so 00:08:29.825 CC lib/thread/thread.o 00:08:29.825 CC lib/thread/iobuf.o 00:08:29.825 CC lib/sock/sock.o 00:08:29.825 CC lib/sock/sock_rpc.o 00:08:30.087 LIB libspdk_sock.a 00:08:30.347 SO libspdk_sock.so.10.0 00:08:30.348 SYMLINK libspdk_sock.so 00:08:30.608 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:30.608 CC lib/nvme/nvme_ctrlr.o 00:08:30.608 CC lib/nvme/nvme_fabric.o 00:08:30.608 CC lib/nvme/nvme_pcie_common.o 00:08:30.608 CC lib/nvme/nvme_ns_cmd.o 00:08:30.608 CC lib/nvme/nvme_ns.o 00:08:30.608 CC lib/nvme/nvme_pcie.o 00:08:30.608 CC lib/nvme/nvme_qpair.o 00:08:30.608 CC lib/nvme/nvme.o 00:08:30.608 CC lib/nvme/nvme_quirks.o 00:08:30.608 CC lib/nvme/nvme_transport.o 00:08:30.608 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:30.608 CC lib/nvme/nvme_discovery.o 00:08:30.608 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:30.608 CC lib/nvme/nvme_tcp.o 00:08:30.608 CC lib/nvme/nvme_opal.o 00:08:30.608 CC lib/nvme/nvme_io_msg.o 00:08:30.608 CC lib/nvme/nvme_poll_group.o 00:08:30.608 CC lib/nvme/nvme_zns.o 00:08:30.608 CC lib/nvme/nvme_stubs.o 00:08:30.608 CC lib/nvme/nvme_auth.o 00:08:30.608 CC lib/nvme/nvme_cuse.o 00:08:30.608 CC lib/nvme/nvme_vfio_user.o 00:08:30.608 CC lib/nvme/nvme_rdma.o 00:08:31.179 LIB libspdk_thread.a 00:08:31.179 SO libspdk_thread.so.11.0 00:08:31.179 SYMLINK libspdk_thread.so 00:08:31.440 CC lib/accel/accel_rpc.o 00:08:31.440 CC lib/accel/accel.o 00:08:31.440 CC lib/accel/accel_sw.o 00:08:31.440 CC lib/init/json_config.o 
00:08:31.440 CC lib/blob/request.o 00:08:31.440 CC lib/init/subsystem_rpc.o 00:08:31.440 CC lib/blob/blobstore.o 00:08:31.440 CC lib/init/subsystem.o 00:08:31.440 CC lib/blob/zeroes.o 00:08:31.440 CC lib/init/rpc.o 00:08:31.440 CC lib/fsdev/fsdev.o 00:08:31.440 CC lib/virtio/virtio.o 00:08:31.440 CC lib/blob/blob_bs_dev.o 00:08:31.440 CC lib/fsdev/fsdev_io.o 00:08:31.440 CC lib/virtio/virtio_vhost_user.o 00:08:31.440 CC lib/vfu_tgt/tgt_endpoint.o 00:08:31.440 CC lib/virtio/virtio_vfio_user.o 00:08:31.440 CC lib/fsdev/fsdev_rpc.o 00:08:31.440 CC lib/virtio/virtio_pci.o 00:08:31.440 CC lib/vfu_tgt/tgt_rpc.o 00:08:31.701 LIB libspdk_init.a 00:08:31.963 SO libspdk_init.so.6.0 00:08:31.963 LIB libspdk_virtio.a 00:08:31.963 LIB libspdk_vfu_tgt.a 00:08:31.963 SO libspdk_virtio.so.7.0 00:08:31.963 SYMLINK libspdk_init.so 00:08:31.963 SO libspdk_vfu_tgt.so.3.0 00:08:31.963 SYMLINK libspdk_virtio.so 00:08:31.963 SYMLINK libspdk_vfu_tgt.so 00:08:32.225 LIB libspdk_fsdev.a 00:08:32.225 SO libspdk_fsdev.so.2.0 00:08:32.225 SYMLINK libspdk_fsdev.so 00:08:32.225 CC lib/event/app.o 00:08:32.225 CC lib/event/reactor.o 00:08:32.225 CC lib/event/log_rpc.o 00:08:32.225 CC lib/event/app_rpc.o 00:08:32.225 CC lib/event/scheduler_static.o 00:08:32.487 LIB libspdk_accel.a 00:08:32.487 SO libspdk_accel.so.16.0 00:08:32.487 LIB libspdk_nvme.a 00:08:32.487 SYMLINK libspdk_accel.so 00:08:32.487 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:32.747 LIB libspdk_event.a 00:08:32.747 SO libspdk_nvme.so.15.0 00:08:32.747 SO libspdk_event.so.14.0 00:08:32.747 SYMLINK libspdk_event.so 00:08:33.008 SYMLINK libspdk_nvme.so 00:08:33.008 CC lib/bdev/bdev.o 00:08:33.008 CC lib/bdev/bdev_rpc.o 00:08:33.008 CC lib/bdev/bdev_zone.o 00:08:33.008 CC lib/bdev/part.o 00:08:33.008 CC lib/bdev/scsi_nvme.o 00:08:33.269 LIB libspdk_fuse_dispatcher.a 00:08:33.269 SO libspdk_fuse_dispatcher.so.1.0 00:08:33.269 SYMLINK libspdk_fuse_dispatcher.so 00:08:34.212 LIB libspdk_blob.a 00:08:34.212 SO libspdk_blob.so.11.0 
00:08:34.212 SYMLINK libspdk_blob.so 00:08:34.800 CC lib/blobfs/blobfs.o 00:08:34.800 CC lib/blobfs/tree.o 00:08:34.800 CC lib/lvol/lvol.o 00:08:35.373 LIB libspdk_bdev.a 00:08:35.373 SO libspdk_bdev.so.17.0 00:08:35.373 LIB libspdk_blobfs.a 00:08:35.373 SYMLINK libspdk_bdev.so 00:08:35.373 SO libspdk_blobfs.so.10.0 00:08:35.373 LIB libspdk_lvol.a 00:08:35.635 SO libspdk_lvol.so.10.0 00:08:35.635 SYMLINK libspdk_blobfs.so 00:08:35.635 SYMLINK libspdk_lvol.so 00:08:35.896 CC lib/scsi/dev.o 00:08:35.896 CC lib/scsi/lun.o 00:08:35.896 CC lib/scsi/port.o 00:08:35.896 CC lib/scsi/scsi.o 00:08:35.896 CC lib/scsi/scsi_bdev.o 00:08:35.896 CC lib/scsi/scsi_pr.o 00:08:35.896 CC lib/scsi/scsi_rpc.o 00:08:35.896 CC lib/scsi/task.o 00:08:35.896 CC lib/nbd/nbd.o 00:08:35.896 CC lib/nbd/nbd_rpc.o 00:08:35.896 CC lib/ftl/ftl_core.o 00:08:35.896 CC lib/ftl/ftl_init.o 00:08:35.896 CC lib/ftl/ftl_debug.o 00:08:35.896 CC lib/ftl/ftl_layout.o 00:08:35.896 CC lib/ftl/ftl_io.o 00:08:35.896 CC lib/ftl/ftl_sb.o 00:08:35.896 CC lib/nvmf/ctrlr.o 00:08:35.896 CC lib/ftl/ftl_l2p.o 00:08:35.896 CC lib/ftl/ftl_l2p_flat.o 00:08:35.896 CC lib/nvmf/ctrlr_discovery.o 00:08:35.896 CC lib/ftl/ftl_band_ops.o 00:08:35.896 CC lib/ftl/ftl_nv_cache.o 00:08:35.896 CC lib/nvmf/ctrlr_bdev.o 00:08:35.896 CC lib/ublk/ublk.o 00:08:35.896 CC lib/ftl/ftl_band.o 00:08:35.896 CC lib/nvmf/subsystem.o 00:08:35.896 CC lib/ublk/ublk_rpc.o 00:08:35.896 CC lib/nvmf/nvmf.o 00:08:35.896 CC lib/ftl/ftl_writer.o 00:08:35.896 CC lib/nvmf/nvmf_rpc.o 00:08:35.896 CC lib/ftl/ftl_rq.o 00:08:35.896 CC lib/nvmf/transport.o 00:08:35.896 CC lib/nvmf/tcp.o 00:08:35.896 CC lib/ftl/ftl_reloc.o 00:08:35.896 CC lib/nvmf/stubs.o 00:08:35.896 CC lib/nvmf/vfio_user.o 00:08:35.896 CC lib/ftl/ftl_l2p_cache.o 00:08:35.896 CC lib/ftl/ftl_p2l.o 00:08:35.896 CC lib/nvmf/mdns_server.o 00:08:35.896 CC lib/ftl/ftl_p2l_log.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt.o 00:08:35.896 CC lib/nvmf/rdma.o 00:08:35.896 CC lib/nvmf/auth.o 00:08:35.896 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:35.896 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:35.896 CC lib/ftl/utils/ftl_conf.o 00:08:35.896 CC lib/ftl/utils/ftl_mempool.o 00:08:35.896 CC lib/ftl/utils/ftl_md.o 00:08:35.896 CC lib/ftl/utils/ftl_bitmap.o 00:08:35.896 CC lib/ftl/utils/ftl_property.o 00:08:35.896 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:35.896 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:35.896 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:35.896 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:35.896 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:35.896 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:35.896 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:35.896 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:35.896 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:35.896 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:35.896 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:35.896 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:35.896 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:35.896 CC lib/ftl/base/ftl_base_dev.o 00:08:35.896 CC lib/ftl/base/ftl_base_bdev.o 00:08:35.896 CC lib/ftl/ftl_trace.o 00:08:36.466 LIB libspdk_nbd.a 00:08:36.466 SO libspdk_nbd.so.7.0 00:08:36.466 LIB libspdk_scsi.a 00:08:36.466 SO libspdk_scsi.so.9.0 00:08:36.466 SYMLINK libspdk_nbd.so 00:08:36.466 SYMLINK libspdk_scsi.so 00:08:36.466 LIB libspdk_ublk.a 00:08:36.466 SO libspdk_ublk.so.3.0 00:08:36.728 SYMLINK libspdk_ublk.so 00:08:36.728 LIB libspdk_ftl.a 00:08:36.728 CC lib/iscsi/conn.o 00:08:36.728 CC lib/vhost/vhost_rpc.o 00:08:36.728 CC lib/vhost/vhost.o 00:08:36.728 CC 
lib/iscsi/init_grp.o 00:08:36.728 CC lib/iscsi/iscsi.o 00:08:36.728 CC lib/vhost/rte_vhost_user.o 00:08:36.728 CC lib/vhost/vhost_scsi.o 00:08:36.728 CC lib/iscsi/param.o 00:08:36.728 CC lib/vhost/vhost_blk.o 00:08:36.728 CC lib/iscsi/portal_grp.o 00:08:36.728 CC lib/iscsi/tgt_node.o 00:08:36.728 CC lib/iscsi/iscsi_subsystem.o 00:08:36.728 CC lib/iscsi/iscsi_rpc.o 00:08:36.728 CC lib/iscsi/task.o 00:08:36.990 SO libspdk_ftl.so.9.0 00:08:37.251 SYMLINK libspdk_ftl.so 00:08:37.823 LIB libspdk_nvmf.a 00:08:37.823 SO libspdk_nvmf.so.20.0 00:08:37.823 LIB libspdk_vhost.a 00:08:37.823 SO libspdk_vhost.so.8.0 00:08:38.083 SYMLINK libspdk_vhost.so 00:08:38.083 SYMLINK libspdk_nvmf.so 00:08:38.083 LIB libspdk_iscsi.a 00:08:38.083 SO libspdk_iscsi.so.8.0 00:08:38.344 SYMLINK libspdk_iscsi.so 00:08:38.917 CC module/vfu_device/vfu_virtio_blk.o 00:08:38.917 CC module/vfu_device/vfu_virtio.o 00:08:38.917 CC module/vfu_device/vfu_virtio_scsi.o 00:08:38.917 CC module/vfu_device/vfu_virtio_rpc.o 00:08:38.917 CC module/vfu_device/vfu_virtio_fs.o 00:08:38.917 CC module/env_dpdk/env_dpdk_rpc.o 00:08:38.917 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:38.917 CC module/sock/posix/posix.o 00:08:38.917 CC module/accel/dsa/accel_dsa.o 00:08:38.917 CC module/accel/dsa/accel_dsa_rpc.o 00:08:38.917 CC module/accel/error/accel_error.o 00:08:38.917 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:38.917 CC module/accel/error/accel_error_rpc.o 00:08:38.917 CC module/keyring/linux/keyring.o 00:08:38.917 CC module/keyring/linux/keyring_rpc.o 00:08:38.917 CC module/accel/iaa/accel_iaa.o 00:08:38.917 CC module/accel/iaa/accel_iaa_rpc.o 00:08:38.917 LIB libspdk_env_dpdk_rpc.a 00:08:38.917 CC module/scheduler/gscheduler/gscheduler.o 00:08:38.917 CC module/accel/ioat/accel_ioat.o 00:08:38.917 CC module/fsdev/aio/fsdev_aio.o 00:08:38.917 CC module/keyring/file/keyring.o 00:08:38.917 CC module/accel/ioat/accel_ioat_rpc.o 00:08:38.917 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:38.917 CC 
module/keyring/file/keyring_rpc.o 00:08:38.917 CC module/fsdev/aio/linux_aio_mgr.o 00:08:38.917 CC module/blob/bdev/blob_bdev.o 00:08:38.917 SO libspdk_env_dpdk_rpc.so.6.0 00:08:39.179 SYMLINK libspdk_env_dpdk_rpc.so 00:08:39.179 LIB libspdk_keyring_linux.a 00:08:39.179 LIB libspdk_scheduler_dpdk_governor.a 00:08:39.179 LIB libspdk_scheduler_gscheduler.a 00:08:39.179 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:39.179 SO libspdk_keyring_linux.so.1.0 00:08:39.179 LIB libspdk_keyring_file.a 00:08:39.179 LIB libspdk_accel_error.a 00:08:39.179 SO libspdk_scheduler_gscheduler.so.4.0 00:08:39.179 LIB libspdk_scheduler_dynamic.a 00:08:39.179 SO libspdk_keyring_file.so.2.0 00:08:39.179 LIB libspdk_accel_iaa.a 00:08:39.179 LIB libspdk_accel_ioat.a 00:08:39.179 SO libspdk_accel_error.so.2.0 00:08:39.179 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:39.179 SO libspdk_accel_iaa.so.3.0 00:08:39.179 SO libspdk_scheduler_dynamic.so.4.0 00:08:39.179 SYMLINK libspdk_keyring_linux.so 00:08:39.179 SYMLINK libspdk_scheduler_gscheduler.so 00:08:39.179 LIB libspdk_accel_dsa.a 00:08:39.179 SO libspdk_accel_ioat.so.6.0 00:08:39.179 SYMLINK libspdk_keyring_file.so 00:08:39.179 LIB libspdk_blob_bdev.a 00:08:39.179 SYMLINK libspdk_accel_error.so 00:08:39.179 SO libspdk_accel_dsa.so.5.0 00:08:39.179 SYMLINK libspdk_accel_iaa.so 00:08:39.179 SYMLINK libspdk_scheduler_dynamic.so 00:08:39.441 SO libspdk_blob_bdev.so.11.0 00:08:39.441 SYMLINK libspdk_accel_ioat.so 00:08:39.441 SYMLINK libspdk_accel_dsa.so 00:08:39.441 LIB libspdk_vfu_device.a 00:08:39.441 SYMLINK libspdk_blob_bdev.so 00:08:39.441 SO libspdk_vfu_device.so.3.0 00:08:39.441 SYMLINK libspdk_vfu_device.so 00:08:39.441 LIB libspdk_fsdev_aio.a 00:08:39.702 SO libspdk_fsdev_aio.so.1.0 00:08:39.702 LIB libspdk_sock_posix.a 00:08:39.702 SYMLINK libspdk_fsdev_aio.so 00:08:39.702 SO libspdk_sock_posix.so.6.0 00:08:39.702 SYMLINK libspdk_sock_posix.so 00:08:39.963 CC module/bdev/delay/vbdev_delay.o 00:08:39.963 CC 
module/bdev/lvol/vbdev_lvol.o 00:08:39.963 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:39.963 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:39.963 CC module/bdev/gpt/gpt.o 00:08:39.963 CC module/bdev/error/vbdev_error.o 00:08:39.963 CC module/bdev/gpt/vbdev_gpt.o 00:08:39.963 CC module/bdev/error/vbdev_error_rpc.o 00:08:39.963 CC module/bdev/passthru/vbdev_passthru.o 00:08:39.963 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:39.963 CC module/bdev/null/bdev_null.o 00:08:39.963 CC module/bdev/null/bdev_null_rpc.o 00:08:39.963 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:39.963 CC module/bdev/malloc/bdev_malloc.o 00:08:39.963 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:39.963 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:39.963 CC module/bdev/nvme/bdev_nvme.o 00:08:39.963 CC module/bdev/ftl/bdev_ftl.o 00:08:39.963 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:39.963 CC module/blobfs/bdev/blobfs_bdev.o 00:08:39.963 CC module/bdev/raid/bdev_raid.o 00:08:39.963 CC module/bdev/raid/bdev_raid_rpc.o 00:08:39.963 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:39.963 CC module/bdev/nvme/nvme_rpc.o 00:08:39.963 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:39.963 CC module/bdev/nvme/bdev_mdns_client.o 00:08:39.963 CC module/bdev/raid/bdev_raid_sb.o 00:08:39.963 CC module/bdev/iscsi/bdev_iscsi.o 00:08:39.963 CC module/bdev/raid/raid0.o 00:08:39.963 CC module/bdev/split/vbdev_split.o 00:08:39.963 CC module/bdev/aio/bdev_aio.o 00:08:39.963 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:39.963 CC module/bdev/nvme/vbdev_opal.o 00:08:39.963 CC module/bdev/aio/bdev_aio_rpc.o 00:08:39.963 CC module/bdev/raid/raid1.o 00:08:39.963 CC module/bdev/split/vbdev_split_rpc.o 00:08:39.964 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:39.964 CC module/bdev/raid/concat.o 00:08:39.964 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:39.964 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:39.964 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:39.964 CC module/bdev/virtio/bdev_virtio_rpc.o 
00:08:40.223 LIB libspdk_blobfs_bdev.a 00:08:40.223 LIB libspdk_bdev_null.a 00:08:40.223 SO libspdk_blobfs_bdev.so.6.0 00:08:40.223 LIB libspdk_bdev_split.a 00:08:40.223 LIB libspdk_bdev_gpt.a 00:08:40.223 LIB libspdk_bdev_error.a 00:08:40.223 SO libspdk_bdev_null.so.6.0 00:08:40.223 SO libspdk_bdev_gpt.so.6.0 00:08:40.223 SO libspdk_bdev_split.so.6.0 00:08:40.223 LIB libspdk_bdev_ftl.a 00:08:40.223 SO libspdk_bdev_error.so.6.0 00:08:40.223 SYMLINK libspdk_bdev_null.so 00:08:40.223 SYMLINK libspdk_blobfs_bdev.so 00:08:40.223 SO libspdk_bdev_ftl.so.6.0 00:08:40.223 LIB libspdk_bdev_zone_block.a 00:08:40.223 LIB libspdk_bdev_malloc.a 00:08:40.223 LIB libspdk_bdev_passthru.a 00:08:40.223 SYMLINK libspdk_bdev_error.so 00:08:40.223 LIB libspdk_bdev_delay.a 00:08:40.223 SYMLINK libspdk_bdev_gpt.so 00:08:40.223 LIB libspdk_bdev_iscsi.a 00:08:40.223 LIB libspdk_bdev_aio.a 00:08:40.223 SYMLINK libspdk_bdev_split.so 00:08:40.484 SO libspdk_bdev_passthru.so.6.0 00:08:40.484 SO libspdk_bdev_zone_block.so.6.0 00:08:40.484 SO libspdk_bdev_malloc.so.6.0 00:08:40.484 SO libspdk_bdev_delay.so.6.0 00:08:40.484 SYMLINK libspdk_bdev_ftl.so 00:08:40.484 SO libspdk_bdev_iscsi.so.6.0 00:08:40.484 SO libspdk_bdev_aio.so.6.0 00:08:40.484 SYMLINK libspdk_bdev_passthru.so 00:08:40.484 SYMLINK libspdk_bdev_zone_block.so 00:08:40.484 SYMLINK libspdk_bdev_delay.so 00:08:40.484 LIB libspdk_bdev_lvol.a 00:08:40.484 SYMLINK libspdk_bdev_malloc.so 00:08:40.484 SYMLINK libspdk_bdev_iscsi.so 00:08:40.484 SYMLINK libspdk_bdev_aio.so 00:08:40.484 LIB libspdk_bdev_virtio.a 00:08:40.484 SO libspdk_bdev_lvol.so.6.0 00:08:40.484 SO libspdk_bdev_virtio.so.6.0 00:08:40.484 SYMLINK libspdk_bdev_lvol.so 00:08:40.484 SYMLINK libspdk_bdev_virtio.so 00:08:41.057 LIB libspdk_bdev_raid.a 00:08:41.057 SO libspdk_bdev_raid.so.6.0 00:08:41.057 SYMLINK libspdk_bdev_raid.so 00:08:42.443 LIB libspdk_bdev_nvme.a 00:08:42.444 SO libspdk_bdev_nvme.so.7.1 00:08:42.444 SYMLINK libspdk_bdev_nvme.so 00:08:43.019 CC 
module/event/subsystems/vfu_tgt/vfu_tgt.o 00:08:43.019 CC module/event/subsystems/scheduler/scheduler.o 00:08:43.019 CC module/event/subsystems/vmd/vmd.o 00:08:43.019 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:43.019 CC module/event/subsystems/iobuf/iobuf.o 00:08:43.019 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:43.019 CC module/event/subsystems/sock/sock.o 00:08:43.019 CC module/event/subsystems/keyring/keyring.o 00:08:43.019 CC module/event/subsystems/fsdev/fsdev.o 00:08:43.019 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:43.280 LIB libspdk_event_vmd.a 00:08:43.280 LIB libspdk_event_vfu_tgt.a 00:08:43.280 LIB libspdk_event_scheduler.a 00:08:43.280 LIB libspdk_event_keyring.a 00:08:43.280 LIB libspdk_event_sock.a 00:08:43.280 LIB libspdk_event_fsdev.a 00:08:43.280 LIB libspdk_event_vhost_blk.a 00:08:43.280 LIB libspdk_event_iobuf.a 00:08:43.280 SO libspdk_event_vmd.so.6.0 00:08:43.280 SO libspdk_event_vfu_tgt.so.3.0 00:08:43.280 SO libspdk_event_scheduler.so.4.0 00:08:43.280 SO libspdk_event_sock.so.5.0 00:08:43.280 SO libspdk_event_keyring.so.1.0 00:08:43.280 SO libspdk_event_vhost_blk.so.3.0 00:08:43.280 SO libspdk_event_fsdev.so.1.0 00:08:43.280 SO libspdk_event_iobuf.so.3.0 00:08:43.280 SYMLINK libspdk_event_vmd.so 00:08:43.280 SYMLINK libspdk_event_vfu_tgt.so 00:08:43.280 SYMLINK libspdk_event_scheduler.so 00:08:43.280 SYMLINK libspdk_event_keyring.so 00:08:43.280 SYMLINK libspdk_event_sock.so 00:08:43.280 SYMLINK libspdk_event_vhost_blk.so 00:08:43.280 SYMLINK libspdk_event_fsdev.so 00:08:43.280 SYMLINK libspdk_event_iobuf.so 00:08:43.853 CC module/event/subsystems/accel/accel.o 00:08:43.853 LIB libspdk_event_accel.a 00:08:43.853 SO libspdk_event_accel.so.6.0 00:08:43.853 SYMLINK libspdk_event_accel.so 00:08:44.426 CC module/event/subsystems/bdev/bdev.o 00:08:44.426 LIB libspdk_event_bdev.a 00:08:44.426 SO libspdk_event_bdev.so.6.0 00:08:44.688 SYMLINK libspdk_event_bdev.so 00:08:44.950 CC module/event/subsystems/ublk/ublk.o 
00:08:44.950 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:44.950 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:44.950 CC module/event/subsystems/nbd/nbd.o 00:08:44.951 CC module/event/subsystems/scsi/scsi.o 00:08:45.212 LIB libspdk_event_ublk.a 00:08:45.212 LIB libspdk_event_nbd.a 00:08:45.212 LIB libspdk_event_scsi.a 00:08:45.212 SO libspdk_event_ublk.so.3.0 00:08:45.212 SO libspdk_event_nbd.so.6.0 00:08:45.212 SO libspdk_event_scsi.so.6.0 00:08:45.212 LIB libspdk_event_nvmf.a 00:08:45.212 SYMLINK libspdk_event_ublk.so 00:08:45.212 SYMLINK libspdk_event_nbd.so 00:08:45.212 SO libspdk_event_nvmf.so.6.0 00:08:45.212 SYMLINK libspdk_event_scsi.so 00:08:45.212 SYMLINK libspdk_event_nvmf.so 00:08:45.472 CC module/event/subsystems/iscsi/iscsi.o 00:08:45.472 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:45.733 LIB libspdk_event_vhost_scsi.a 00:08:45.733 LIB libspdk_event_iscsi.a 00:08:45.733 SO libspdk_event_vhost_scsi.so.3.0 00:08:45.733 SO libspdk_event_iscsi.so.6.0 00:08:45.733 SYMLINK libspdk_event_vhost_scsi.so 00:08:45.994 SYMLINK libspdk_event_iscsi.so 00:08:45.994 SO libspdk.so.6.0 00:08:45.994 SYMLINK libspdk.so 00:08:46.568 CC app/spdk_nvme_identify/identify.o 00:08:46.568 CC app/trace_record/trace_record.o 00:08:46.568 CXX app/trace/trace.o 00:08:46.568 CC app/spdk_top/spdk_top.o 00:08:46.568 CC app/spdk_lspci/spdk_lspci.o 00:08:46.568 CC test/rpc_client/rpc_client_test.o 00:08:46.568 TEST_HEADER include/spdk/accel.h 00:08:46.568 CC app/spdk_nvme_perf/perf.o 00:08:46.568 CC app/spdk_nvme_discover/discovery_aer.o 00:08:46.568 TEST_HEADER include/spdk/accel_module.h 00:08:46.568 TEST_HEADER include/spdk/assert.h 00:08:46.568 TEST_HEADER include/spdk/barrier.h 00:08:46.568 TEST_HEADER include/spdk/base64.h 00:08:46.568 TEST_HEADER include/spdk/bdev.h 00:08:46.568 TEST_HEADER include/spdk/bdev_module.h 00:08:46.568 TEST_HEADER include/spdk/bit_array.h 00:08:46.568 TEST_HEADER include/spdk/bdev_zone.h 00:08:46.568 TEST_HEADER 
include/spdk/bit_pool.h 00:08:46.568 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:46.568 TEST_HEADER include/spdk/blob_bdev.h 00:08:46.568 TEST_HEADER include/spdk/blobfs.h 00:08:46.568 TEST_HEADER include/spdk/blob.h 00:08:46.568 TEST_HEADER include/spdk/conf.h 00:08:46.568 TEST_HEADER include/spdk/config.h 00:08:46.568 TEST_HEADER include/spdk/crc16.h 00:08:46.568 TEST_HEADER include/spdk/cpuset.h 00:08:46.568 TEST_HEADER include/spdk/crc32.h 00:08:46.568 TEST_HEADER include/spdk/crc64.h 00:08:46.568 TEST_HEADER include/spdk/dif.h 00:08:46.568 TEST_HEADER include/spdk/dma.h 00:08:46.568 TEST_HEADER include/spdk/endian.h 00:08:46.568 CC app/spdk_dd/spdk_dd.o 00:08:46.568 TEST_HEADER include/spdk/env_dpdk.h 00:08:46.568 TEST_HEADER include/spdk/event.h 00:08:46.568 TEST_HEADER include/spdk/env.h 00:08:46.568 TEST_HEADER include/spdk/fd.h 00:08:46.568 TEST_HEADER include/spdk/fd_group.h 00:08:46.568 CC app/iscsi_tgt/iscsi_tgt.o 00:08:46.568 CC app/nvmf_tgt/nvmf_main.o 00:08:46.568 TEST_HEADER include/spdk/file.h 00:08:46.568 TEST_HEADER include/spdk/fsdev_module.h 00:08:46.568 TEST_HEADER include/spdk/fsdev.h 00:08:46.568 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:46.568 TEST_HEADER include/spdk/gpt_spec.h 00:08:46.568 TEST_HEADER include/spdk/ftl.h 00:08:46.568 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:46.568 TEST_HEADER include/spdk/hexlify.h 00:08:46.568 TEST_HEADER include/spdk/histogram_data.h 00:08:46.568 TEST_HEADER include/spdk/idxd.h 00:08:46.568 TEST_HEADER include/spdk/ioat.h 00:08:46.568 TEST_HEADER include/spdk/idxd_spec.h 00:08:46.568 TEST_HEADER include/spdk/init.h 00:08:46.568 TEST_HEADER include/spdk/ioat_spec.h 00:08:46.568 TEST_HEADER include/spdk/iscsi_spec.h 00:08:46.568 TEST_HEADER include/spdk/json.h 00:08:46.568 TEST_HEADER include/spdk/jsonrpc.h 00:08:46.568 TEST_HEADER include/spdk/keyring.h 00:08:46.568 TEST_HEADER include/spdk/likely.h 00:08:46.568 TEST_HEADER include/spdk/keyring_module.h 00:08:46.568 TEST_HEADER 
include/spdk/lvol.h 00:08:46.568 TEST_HEADER include/spdk/log.h 00:08:46.568 TEST_HEADER include/spdk/md5.h 00:08:46.568 TEST_HEADER include/spdk/memory.h 00:08:46.568 TEST_HEADER include/spdk/mmio.h 00:08:46.568 TEST_HEADER include/spdk/nbd.h 00:08:46.568 TEST_HEADER include/spdk/notify.h 00:08:46.568 TEST_HEADER include/spdk/net.h 00:08:46.568 TEST_HEADER include/spdk/nvme.h 00:08:46.568 CC app/spdk_tgt/spdk_tgt.o 00:08:46.568 TEST_HEADER include/spdk/nvme_intel.h 00:08:46.568 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:46.568 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:46.568 TEST_HEADER include/spdk/nvme_spec.h 00:08:46.568 TEST_HEADER include/spdk/nvme_zns.h 00:08:46.568 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:46.568 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:46.568 TEST_HEADER include/spdk/nvmf_spec.h 00:08:46.568 TEST_HEADER include/spdk/nvmf.h 00:08:46.568 TEST_HEADER include/spdk/nvmf_transport.h 00:08:46.568 TEST_HEADER include/spdk/opal.h 00:08:46.568 TEST_HEADER include/spdk/opal_spec.h 00:08:46.568 TEST_HEADER include/spdk/pci_ids.h 00:08:46.568 TEST_HEADER include/spdk/queue.h 00:08:46.568 TEST_HEADER include/spdk/reduce.h 00:08:46.568 TEST_HEADER include/spdk/pipe.h 00:08:46.569 TEST_HEADER include/spdk/rpc.h 00:08:46.569 TEST_HEADER include/spdk/scheduler.h 00:08:46.569 TEST_HEADER include/spdk/scsi.h 00:08:46.569 TEST_HEADER include/spdk/scsi_spec.h 00:08:46.569 TEST_HEADER include/spdk/sock.h 00:08:46.569 TEST_HEADER include/spdk/stdinc.h 00:08:46.569 TEST_HEADER include/spdk/thread.h 00:08:46.569 TEST_HEADER include/spdk/string.h 00:08:46.569 TEST_HEADER include/spdk/trace_parser.h 00:08:46.569 TEST_HEADER include/spdk/trace.h 00:08:46.569 TEST_HEADER include/spdk/tree.h 00:08:46.569 TEST_HEADER include/spdk/ublk.h 00:08:46.569 TEST_HEADER include/spdk/util.h 00:08:46.569 TEST_HEADER include/spdk/uuid.h 00:08:46.569 TEST_HEADER include/spdk/version.h 00:08:46.569 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:46.569 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:08:46.569 TEST_HEADER include/spdk/vhost.h 00:08:46.569 TEST_HEADER include/spdk/vmd.h 00:08:46.569 TEST_HEADER include/spdk/xor.h 00:08:46.569 CXX test/cpp_headers/accel.o 00:08:46.569 TEST_HEADER include/spdk/zipf.h 00:08:46.569 CXX test/cpp_headers/accel_module.o 00:08:46.569 CXX test/cpp_headers/assert.o 00:08:46.569 CXX test/cpp_headers/barrier.o 00:08:46.569 CXX test/cpp_headers/base64.o 00:08:46.569 CXX test/cpp_headers/bdev.o 00:08:46.569 CXX test/cpp_headers/bdev_module.o 00:08:46.569 CXX test/cpp_headers/bdev_zone.o 00:08:46.569 CXX test/cpp_headers/bit_array.o 00:08:46.569 CXX test/cpp_headers/bit_pool.o 00:08:46.569 CXX test/cpp_headers/blob_bdev.o 00:08:46.569 CXX test/cpp_headers/blob.o 00:08:46.569 CXX test/cpp_headers/conf.o 00:08:46.569 CXX test/cpp_headers/blobfs_bdev.o 00:08:46.569 CXX test/cpp_headers/blobfs.o 00:08:46.569 CXX test/cpp_headers/cpuset.o 00:08:46.569 CXX test/cpp_headers/config.o 00:08:46.569 CXX test/cpp_headers/crc16.o 00:08:46.569 CXX test/cpp_headers/crc32.o 00:08:46.569 CC examples/util/zipf/zipf.o 00:08:46.569 CXX test/cpp_headers/crc64.o 00:08:46.569 CXX test/cpp_headers/dif.o 00:08:46.569 CXX test/cpp_headers/endian.o 00:08:46.569 CXX test/cpp_headers/dma.o 00:08:46.569 CXX test/cpp_headers/env.o 00:08:46.569 CXX test/cpp_headers/env_dpdk.o 00:08:46.569 CXX test/cpp_headers/event.o 00:08:46.569 CXX test/cpp_headers/fd_group.o 00:08:46.569 CXX test/cpp_headers/file.o 00:08:46.569 CXX test/cpp_headers/fsdev.o 00:08:46.569 CXX test/cpp_headers/fd.o 00:08:46.569 CXX test/cpp_headers/fsdev_module.o 00:08:46.569 CXX test/cpp_headers/gpt_spec.o 00:08:46.569 CXX test/cpp_headers/ftl.o 00:08:46.569 CXX test/cpp_headers/hexlify.o 00:08:46.569 CXX test/cpp_headers/fuse_dispatcher.o 00:08:46.569 CXX test/cpp_headers/idxd_spec.o 00:08:46.569 CXX test/cpp_headers/histogram_data.o 00:08:46.569 CXX test/cpp_headers/idxd.o 00:08:46.569 CXX test/cpp_headers/init.o 00:08:46.569 CC 
examples/ioat/verify/verify.o 00:08:46.569 CXX test/cpp_headers/iscsi_spec.o 00:08:46.569 CXX test/cpp_headers/ioat.o 00:08:46.569 CXX test/cpp_headers/ioat_spec.o 00:08:46.569 CC examples/ioat/perf/perf.o 00:08:46.832 CXX test/cpp_headers/json.o 00:08:46.832 CXX test/cpp_headers/keyring.o 00:08:46.832 CXX test/cpp_headers/keyring_module.o 00:08:46.832 CXX test/cpp_headers/jsonrpc.o 00:08:46.832 CXX test/cpp_headers/log.o 00:08:46.832 CXX test/cpp_headers/lvol.o 00:08:46.832 CXX test/cpp_headers/likely.o 00:08:46.832 CXX test/cpp_headers/md5.o 00:08:46.832 CXX test/cpp_headers/mmio.o 00:08:46.832 CC test/app/jsoncat/jsoncat.o 00:08:46.832 CXX test/cpp_headers/memory.o 00:08:46.832 CXX test/cpp_headers/nbd.o 00:08:46.832 CXX test/cpp_headers/net.o 00:08:46.832 CC test/app/histogram_perf/histogram_perf.o 00:08:46.832 CC test/app/stub/stub.o 00:08:46.832 LINK spdk_lspci 00:08:46.832 CXX test/cpp_headers/nvme.o 00:08:46.832 CXX test/cpp_headers/notify.o 00:08:46.832 CXX test/cpp_headers/nvme_intel.o 00:08:46.832 CXX test/cpp_headers/nvme_ocssd.o 00:08:46.832 CC test/thread/poller_perf/poller_perf.o 00:08:46.832 CC app/fio/nvme/fio_plugin.o 00:08:46.832 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:46.832 CXX test/cpp_headers/nvme_spec.o 00:08:46.832 CXX test/cpp_headers/nvme_zns.o 00:08:46.832 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:46.832 CXX test/cpp_headers/nvmf_cmd.o 00:08:46.832 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:46.832 CXX test/cpp_headers/nvmf.o 00:08:46.832 CXX test/cpp_headers/opal.o 00:08:46.832 CXX test/cpp_headers/nvmf_spec.o 00:08:46.832 CC test/env/vtophys/vtophys.o 00:08:46.832 CXX test/cpp_headers/opal_spec.o 00:08:46.832 CXX test/cpp_headers/nvmf_transport.o 00:08:46.832 CXX test/cpp_headers/pipe.o 00:08:46.832 CXX test/cpp_headers/queue.o 00:08:46.832 CXX test/cpp_headers/pci_ids.o 00:08:46.832 CXX test/cpp_headers/reduce.o 00:08:46.832 CXX test/cpp_headers/rpc.o 00:08:46.832 CXX test/cpp_headers/scheduler.o 00:08:46.832 CC 
test/app/bdev_svc/bdev_svc.o 00:08:46.832 CXX test/cpp_headers/scsi.o 00:08:46.832 CXX test/cpp_headers/stdinc.o 00:08:46.832 CC test/env/pci/pci_ut.o 00:08:46.832 CXX test/cpp_headers/string.o 00:08:46.832 CXX test/cpp_headers/scsi_spec.o 00:08:46.832 CXX test/cpp_headers/sock.o 00:08:46.832 CC test/env/memory/memory_ut.o 00:08:46.832 CXX test/cpp_headers/trace.o 00:08:46.832 CXX test/cpp_headers/trace_parser.o 00:08:46.832 CXX test/cpp_headers/thread.o 00:08:46.832 CXX test/cpp_headers/tree.o 00:08:46.832 CXX test/cpp_headers/util.o 00:08:46.832 CXX test/cpp_headers/ublk.o 00:08:46.832 CXX test/cpp_headers/version.o 00:08:46.832 CXX test/cpp_headers/uuid.o 00:08:46.832 CXX test/cpp_headers/vfio_user_spec.o 00:08:46.832 CXX test/cpp_headers/vhost.o 00:08:46.832 CXX test/cpp_headers/vfio_user_pci.o 00:08:46.832 LINK rpc_client_test 00:08:46.832 CXX test/cpp_headers/zipf.o 00:08:46.832 CXX test/cpp_headers/vmd.o 00:08:46.832 CXX test/cpp_headers/xor.o 00:08:46.832 CC app/fio/bdev/fio_plugin.o 00:08:46.832 CC test/dma/test_dma/test_dma.o 00:08:46.832 LINK spdk_trace_record 00:08:46.832 LINK spdk_nvme_discover 00:08:46.832 LINK nvmf_tgt 00:08:46.832 LINK interrupt_tgt 00:08:46.832 LINK iscsi_tgt 00:08:47.092 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:47.092 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:47.092 CC test/env/mem_callbacks/mem_callbacks.o 00:08:47.092 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:47.092 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:47.092 LINK spdk_tgt 00:08:47.390 LINK zipf 00:08:47.390 LINK spdk_trace 00:08:47.390 LINK poller_perf 00:08:47.390 LINK spdk_dd 00:08:47.390 LINK jsoncat 00:08:47.390 LINK histogram_perf 00:08:47.390 LINK bdev_svc 00:08:47.390 LINK env_dpdk_post_init 00:08:47.390 LINK stub 00:08:47.390 LINK vtophys 00:08:47.390 LINK verify 00:08:47.390 LINK ioat_perf 00:08:47.711 LINK test_dma 00:08:47.711 LINK spdk_nvme_identify 00:08:47.711 LINK nvme_fuzz 00:08:47.711 CC app/vhost/vhost.o 00:08:47.711 LINK vhost_fuzz 
00:08:47.711 LINK pci_ut 00:08:47.711 CC examples/vmd/lsvmd/lsvmd.o 00:08:47.711 CC examples/sock/hello_world/hello_sock.o 00:08:47.711 CC examples/idxd/perf/perf.o 00:08:47.711 CC examples/vmd/led/led.o 00:08:47.711 CC examples/thread/thread/thread_ex.o 00:08:47.711 LINK spdk_bdev 00:08:47.711 LINK spdk_nvme 00:08:47.971 CC test/event/reactor/reactor.o 00:08:47.971 CC test/event/event_perf/event_perf.o 00:08:47.971 CC test/event/reactor_perf/reactor_perf.o 00:08:47.971 CC test/event/app_repeat/app_repeat.o 00:08:47.971 LINK spdk_nvme_perf 00:08:47.971 CC test/event/scheduler/scheduler.o 00:08:47.971 LINK lsvmd 00:08:47.971 LINK spdk_top 00:08:47.971 LINK vhost 00:08:47.971 LINK led 00:08:47.971 LINK mem_callbacks 00:08:47.971 LINK hello_sock 00:08:47.971 LINK event_perf 00:08:47.971 LINK reactor 00:08:47.971 LINK reactor_perf 00:08:47.971 LINK thread 00:08:47.971 LINK app_repeat 00:08:47.971 LINK idxd_perf 00:08:48.229 LINK scheduler 00:08:48.229 CC test/nvme/simple_copy/simple_copy.o 00:08:48.229 CC test/nvme/aer/aer.o 00:08:48.229 CC test/nvme/sgl/sgl.o 00:08:48.229 CC test/nvme/overhead/overhead.o 00:08:48.229 CC test/nvme/startup/startup.o 00:08:48.229 CC test/nvme/e2edp/nvme_dp.o 00:08:48.229 CC test/nvme/connect_stress/connect_stress.o 00:08:48.229 CC test/nvme/compliance/nvme_compliance.o 00:08:48.229 CC test/nvme/reset/reset.o 00:08:48.229 CC test/nvme/err_injection/err_injection.o 00:08:48.229 CC test/nvme/reserve/reserve.o 00:08:48.229 CC test/nvme/cuse/cuse.o 00:08:48.229 CC test/nvme/fused_ordering/fused_ordering.o 00:08:48.229 CC test/nvme/boot_partition/boot_partition.o 00:08:48.230 CC test/nvme/fdp/fdp.o 00:08:48.230 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:48.230 CC test/accel/dif/dif.o 00:08:48.230 CC test/blobfs/mkfs/mkfs.o 00:08:48.230 LINK memory_ut 00:08:48.230 CC test/lvol/esnap/esnap.o 00:08:48.488 LINK boot_partition 00:08:48.488 LINK startup 00:08:48.488 LINK connect_stress 00:08:48.488 LINK simple_copy 00:08:48.488 LINK 
err_injection 00:08:48.488 LINK doorbell_aers 00:08:48.488 CC examples/nvme/abort/abort.o 00:08:48.488 CC examples/nvme/reconnect/reconnect.o 00:08:48.488 CC examples/nvme/hello_world/hello_world.o 00:08:48.488 LINK fused_ordering 00:08:48.488 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:48.488 CC examples/nvme/arbitration/arbitration.o 00:08:48.488 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:48.488 LINK mkfs 00:08:48.488 CC examples/nvme/hotplug/hotplug.o 00:08:48.488 LINK reserve 00:08:48.488 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:48.488 LINK overhead 00:08:48.488 LINK nvme_dp 00:08:48.488 LINK reset 00:08:48.488 LINK aer 00:08:48.488 LINK sgl 00:08:48.488 LINK nvme_compliance 00:08:48.488 LINK fdp 00:08:48.488 CC examples/accel/perf/accel_perf.o 00:08:48.488 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:48.747 LINK cmb_copy 00:08:48.747 CC examples/blob/cli/blobcli.o 00:08:48.747 LINK pmr_persistence 00:08:48.747 CC examples/blob/hello_world/hello_blob.o 00:08:48.747 LINK hello_world 00:08:48.747 LINK hotplug 00:08:48.747 LINK iscsi_fuzz 00:08:48.747 LINK reconnect 00:08:48.747 LINK arbitration 00:08:48.747 LINK abort 00:08:48.747 LINK dif 00:08:48.747 LINK nvme_manage 00:08:49.007 LINK hello_fsdev 00:08:49.007 LINK hello_blob 00:08:49.007 LINK accel_perf 00:08:49.007 LINK blobcli 00:08:49.268 LINK cuse 00:08:49.528 CC test/bdev/bdevio/bdevio.o 00:08:49.528 CC examples/bdev/hello_world/hello_bdev.o 00:08:49.528 CC examples/bdev/bdevperf/bdevperf.o 00:08:49.789 LINK bdevio 00:08:49.789 LINK hello_bdev 00:08:50.361 LINK bdevperf 00:08:50.932 CC examples/nvmf/nvmf/nvmf.o 00:08:51.192 LINK nvmf 00:08:53.105 LINK esnap 00:08:53.105 00:08:53.105 real 0m55.366s 00:08:53.105 user 7m48.886s 00:08:53.105 sys 4m22.589s 00:08:53.105 16:32:59 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:08:53.105 16:32:59 make -- common/autotest_common.sh@10 -- $ set +x 00:08:53.105 ************************************ 00:08:53.105 END TEST make 
00:08:53.105 ************************************ 00:08:53.105 16:33:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:53.105 16:33:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:53.105 16:33:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:53.105 16:33:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.105 16:33:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:53.105 16:33:00 -- pm/common@44 -- $ pid=2847573 00:08:53.105 16:33:00 -- pm/common@50 -- $ kill -TERM 2847573 00:08:53.105 16:33:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.105 16:33:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:53.105 16:33:00 -- pm/common@44 -- $ pid=2847574 00:08:53.105 16:33:00 -- pm/common@50 -- $ kill -TERM 2847574 00:08:53.105 16:33:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.105 16:33:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:53.105 16:33:00 -- pm/common@44 -- $ pid=2847576 00:08:53.105 16:33:00 -- pm/common@50 -- $ kill -TERM 2847576 00:08:53.105 16:33:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.105 16:33:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:53.105 16:33:00 -- pm/common@44 -- $ pid=2847601 00:08:53.105 16:33:00 -- pm/common@50 -- $ sudo -E kill -TERM 2847601 00:08:53.105 16:33:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:53.105 16:33:00 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:08:53.367 16:33:00 -- common/autotest_common.sh@1690 -- # [[ y 
== y ]] 00:08:53.367 16:33:00 -- common/autotest_common.sh@1691 -- # lcov --version 00:08:53.367 16:33:00 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:53.367 16:33:00 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:53.367 16:33:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.367 16:33:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.367 16:33:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.367 16:33:00 -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.367 16:33:00 -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.367 16:33:00 -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.367 16:33:00 -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.367 16:33:00 -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.367 16:33:00 -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.367 16:33:00 -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.367 16:33:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.367 16:33:00 -- scripts/common.sh@344 -- # case "$op" in 00:08:53.367 16:33:00 -- scripts/common.sh@345 -- # : 1 00:08:53.367 16:33:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.367 16:33:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.367 16:33:00 -- scripts/common.sh@365 -- # decimal 1 00:08:53.367 16:33:00 -- scripts/common.sh@353 -- # local d=1 00:08:53.367 16:33:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.367 16:33:00 -- scripts/common.sh@355 -- # echo 1 00:08:53.367 16:33:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.367 16:33:00 -- scripts/common.sh@366 -- # decimal 2 00:08:53.367 16:33:00 -- scripts/common.sh@353 -- # local d=2 00:08:53.367 16:33:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.367 16:33:00 -- scripts/common.sh@355 -- # echo 2 00:08:53.367 16:33:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.367 16:33:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.367 16:33:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.367 16:33:00 -- scripts/common.sh@368 -- # return 0 00:08:53.367 16:33:00 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.367 16:33:00 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:53.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.367 --rc genhtml_branch_coverage=1 00:08:53.367 --rc genhtml_function_coverage=1 00:08:53.367 --rc genhtml_legend=1 00:08:53.367 --rc geninfo_all_blocks=1 00:08:53.367 --rc geninfo_unexecuted_blocks=1 00:08:53.367 00:08:53.367 ' 00:08:53.367 16:33:00 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:53.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.367 --rc genhtml_branch_coverage=1 00:08:53.367 --rc genhtml_function_coverage=1 00:08:53.367 --rc genhtml_legend=1 00:08:53.367 --rc geninfo_all_blocks=1 00:08:53.367 --rc geninfo_unexecuted_blocks=1 00:08:53.367 00:08:53.367 ' 00:08:53.367 16:33:00 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:53.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.367 --rc genhtml_branch_coverage=1 00:08:53.367 --rc 
genhtml_function_coverage=1 00:08:53.367 --rc genhtml_legend=1 00:08:53.367 --rc geninfo_all_blocks=1 00:08:53.367 --rc geninfo_unexecuted_blocks=1 00:08:53.367 00:08:53.367 ' 00:08:53.367 16:33:00 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.368 --rc genhtml_branch_coverage=1 00:08:53.368 --rc genhtml_function_coverage=1 00:08:53.368 --rc genhtml_legend=1 00:08:53.368 --rc geninfo_all_blocks=1 00:08:53.368 --rc geninfo_unexecuted_blocks=1 00:08:53.368 00:08:53.368 ' 00:08:53.368 16:33:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.368 16:33:00 -- nvmf/common.sh@7 -- # uname -s 00:08:53.368 16:33:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.368 16:33:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.368 16:33:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.368 16:33:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.368 16:33:00 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.368 16:33:00 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:53.368 16:33:00 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.368 16:33:00 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:53.368 16:33:00 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:53.368 16:33:00 -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:53.368 16:33:00 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.368 16:33:00 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:53.368 16:33:00 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:53.368 16:33:00 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.368 16:33:00 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
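The xtrace above (the `lt 1.15 2` / `cmp_versions` calls from scripts/common.sh) walks through a dotted-version comparison: both version strings are split on `.`, `-`, and `:`, then the fields are compared numerically left to right. A minimal bash re-creation of that logic, written from the trace alone (the function body here is a sketch, not the actual scripts/common.sh source):

```shell
# Sketch of the version comparison traced above: split each version
# string on ".-:" (via IFS), pad the shorter one with zeros, and
# compare field by field until the first difference.
cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v lt=0 gt=0
    for ((v = 0; v < max; v++)); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && gt=1 && break
        ((d1 < d2)) && lt=1 && break
    done
    case "$op" in
        '<') ((lt == 1)) ;;
        '>') ((gt == 1)) ;;
    esac
}

# The branch taken in the trace: lcov 1.15 is older than 2, so the
# script selects the lcov-1.x option set.
cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2"
```

This is why the log then exports the `--rc lcov_branch_coverage=1`-style `LCOV_OPTS`: 1 < 2 on the first field, so the `lt 1.15 2` test succeeds.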
00:08:53.368 16:33:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.368 16:33:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.368 16:33:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.368 16:33:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.368 16:33:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.368 16:33:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.368 16:33:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.368 16:33:00 -- paths/export.sh@5 -- # export PATH 00:08:53.368 16:33:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.368 16:33:00 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:53.368 16:33:00 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:53.368 16:33:00 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:53.368 16:33:00 -- nvmf/setup.sh@8 -- # 
NVMF_TARGET_NS_CMD=() 00:08:53.368 16:33:00 -- nvmf/common.sh@50 -- # : 0 00:08:53.368 16:33:00 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:53.368 16:33:00 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:53.368 16:33:00 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:53.368 16:33:00 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.368 16:33:00 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.368 16:33:00 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:53.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:53.368 16:33:00 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:53.368 16:33:00 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:53.368 16:33:00 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:53.368 16:33:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:53.368 16:33:00 -- spdk/autotest.sh@32 -- # uname -s 00:08:53.368 16:33:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:53.368 16:33:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:53.368 16:33:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:08:53.368 16:33:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:08:53.368 16:33:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:08:53.368 16:33:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:53.368 16:33:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:53.368 16:33:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:53.368 16:33:00 -- spdk/autotest.sh@48 -- # udevadm_pid=2913116 00:08:53.368 16:33:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:53.368 16:33:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:53.368 
16:33:00 -- pm/common@17 -- # local monitor 00:08:53.368 16:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.368 16:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.368 16:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.368 16:33:00 -- pm/common@21 -- # date +%s 00:08:53.368 16:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.368 16:33:00 -- pm/common@21 -- # date +%s 00:08:53.368 16:33:00 -- pm/common@25 -- # sleep 1 00:08:53.368 16:33:00 -- pm/common@21 -- # date +%s 00:08:53.368 16:33:00 -- pm/common@21 -- # date +%s 00:08:53.368 16:33:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730820780 00:08:53.368 16:33:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730820780 00:08:53.368 16:33:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730820780 00:08:53.368 16:33:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730820780 00:08:53.368 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730820780_collect-cpu-load.pm.log 00:08:53.368 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730820780_collect-vmstat.pm.log 00:08:53.368 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730820780_collect-cpu-temp.pm.log 00:08:53.368 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730820780_collect-bmc-pm.bmc.pm.log 00:08:54.312 16:33:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:54.312 16:33:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:54.312 16:33:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.312 16:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:54.312 16:33:01 -- spdk/autotest.sh@59 -- # create_test_list 00:08:54.312 16:33:01 -- common/autotest_common.sh@750 -- # xtrace_disable 00:08:54.312 16:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:54.573 16:33:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:08:54.573 16:33:01 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:54.573 16:33:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:54.573 16:33:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:08:54.573 16:33:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:54.573 16:33:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:54.573 16:33:01 -- common/autotest_common.sh@1455 -- # uname 00:08:54.573 16:33:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:08:54.573 16:33:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:54.573 16:33:01 -- common/autotest_common.sh@1475 -- # uname 00:08:54.573 16:33:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:08:54.573 16:33:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:54.573 16:33:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:54.573 lcov: LCOV version 1.15 00:08:54.573 16:33:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:09:09.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:09.488 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:09:27.610 16:33:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:27.610 16:33:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.610 16:33:31 -- common/autotest_common.sh@10 -- # set +x 00:09:27.610 16:33:31 -- spdk/autotest.sh@78 -- # rm -f 00:09:27.610 16:33:31 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:28.182 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:09:28.182 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:09:28.182 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:09:28.182 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:09:28.182 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:09:28.182 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:09:28.182 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:09:28.182 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:09:28.182 0000:65:00.0 (144d a80a): Already using the nvme driver 00:09:28.442 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:09:28.442 0000:00:01.7 
(8086 0b00): Already using the ioatdma driver 00:09:28.442 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:09:28.442 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:09:28.442 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:09:28.442 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:09:28.442 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:09:28.443 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:09:28.703 16:33:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:28.703 16:33:35 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:28.703 16:33:35 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:28.703 16:33:35 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:28.703 16:33:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:28.703 16:33:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:28.703 16:33:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:28.703 16:33:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:28.703 16:33:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:28.703 16:33:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:28.703 16:33:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:28.703 16:33:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:28.703 16:33:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:28.703 16:33:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:28.703 16:33:35 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:28.703 No valid GPT data, bailing 00:09:28.703 16:33:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:28.965 16:33:35 -- scripts/common.sh@394 -- # pt= 00:09:28.965 16:33:35 -- scripts/common.sh@395 -- # return 1 00:09:28.965 16:33:35 -- 
spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:28.965 1+0 records in 00:09:28.965 1+0 records out 00:09:28.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00210703 s, 498 MB/s 00:09:28.965 16:33:35 -- spdk/autotest.sh@105 -- # sync 00:09:28.965 16:33:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:28.965 16:33:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:28.965 16:33:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:37.113 16:33:43 -- spdk/autotest.sh@111 -- # uname -s 00:09:37.113 16:33:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:37.113 16:33:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:37.113 16:33:43 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:09:40.420 Hugepages 00:09:40.420 node hugesize free / total 00:09:40.420 node0 1048576kB 0 / 0 00:09:40.420 node0 2048kB 0 / 0 00:09:40.420 node1 1048576kB 0 / 0 00:09:40.420 node1 2048kB 0 / 0 00:09:40.420 00:09:40.420 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:40.420 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:09:40.420 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:09:40.420 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:09:40.420 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:09:40.420 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:09:40.420 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:09:40.420 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:09:40.420 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:09:40.420 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:09:40.420 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:09:40.420 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:09:40.420 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:09:40.420 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:09:40.420 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:09:40.420 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:09:40.420 I/OAT 
0000:80:01.6 8086 0b00 1 ioatdma - - 00:09:40.420 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:09:40.420 16:33:46 -- spdk/autotest.sh@117 -- # uname -s 00:09:40.420 16:33:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:40.420 16:33:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:40.420 16:33:46 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:43.727 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:09:43.727 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:09:45.644 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:09:45.644 16:33:52 -- common/autotest_common.sh@1515 -- # sleep 1 00:09:46.588 16:33:53 -- common/autotest_common.sh@1516 -- # bdfs=() 00:09:46.588 16:33:53 -- common/autotest_common.sh@1516 -- # local bdfs 00:09:46.588 16:33:53 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:09:46.588 16:33:53 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:09:46.588 16:33:53 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:46.588 16:33:53 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:46.588 16:33:53 -- common/autotest_common.sh@1497 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:46.588 16:33:53 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:46.588 16:33:53 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:46.848 16:33:53 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:09:46.848 16:33:53 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:09:46.848 16:33:53 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:50.154 Waiting for block devices as requested 00:09:50.154 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:09:50.154 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:09:50.154 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:09:50.415 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:09:50.415 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:09:50.415 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:09:50.676 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:09:50.676 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:09:50.676 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:09:50.937 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:09:50.937 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:09:50.937 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:09:51.197 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:09:51.198 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:09:51.198 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:09:51.198 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:09:51.458 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:09:51.719 16:33:58 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:51.719 16:33:58 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:09:51.719 16:33:58 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:09:51.719 16:33:58 -- common/autotest_common.sh@1485 -- # grep 
0000:65:00.0/nvme/nvme 00:09:51.719 16:33:58 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:09:51.719 16:33:58 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:09:51.719 16:33:58 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:09:51.719 16:33:58 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:09:51.719 16:33:58 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:09:51.719 16:33:58 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:09:51.719 16:33:58 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:09:51.719 16:33:58 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:51.719 16:33:58 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:51.719 16:33:58 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:09:51.719 16:33:58 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:51.719 16:33:58 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:09:51.719 16:33:58 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:09:51.719 16:33:58 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:51.719 16:33:58 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:51.719 16:33:58 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:51.719 16:33:58 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:51.719 16:33:58 -- common/autotest_common.sh@1541 -- # continue 00:09:51.719 16:33:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:51.719 16:33:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.719 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:09:51.719 16:33:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:51.719 16:33:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.719 16:33:58 -- common/autotest_common.sh@10 -- # 
set +x 00:09:51.719 16:33:58 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:55.021 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:09:55.283 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:09:55.857 16:34:02 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:55.857 16:34:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.857 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 16:34:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:55.857 16:34:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:09:55.857 16:34:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:09:55.857 16:34:02 -- common/autotest_common.sh@1561 -- # bdfs=() 00:09:55.857 16:34:02 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:09:55.857 16:34:02 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:09:55.857 16:34:02 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:09:55.857 16:34:02 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:09:55.857 16:34:02 -- 
common/autotest_common.sh@1496 -- # bdfs=() 00:09:55.857 16:34:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:55.857 16:34:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:55.857 16:34:02 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:55.857 16:34:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:55.857 16:34:02 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:09:55.857 16:34:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:09:55.857 16:34:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:55.857 16:34:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:09:55.857 16:34:02 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:09:55.857 16:34:02 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:09:55.857 16:34:02 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:09:55.857 16:34:02 -- common/autotest_common.sh@1570 -- # return 0 00:09:55.857 16:34:02 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:09:55.857 16:34:02 -- common/autotest_common.sh@1578 -- # return 0 00:09:55.857 16:34:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:55.857 16:34:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:55.857 16:34:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:55.857 16:34:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:55.857 16:34:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:55.857 16:34:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:55.857 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 16:34:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:55.857 16:34:02 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:09:55.857 16:34:02 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:55.857 16:34:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.857 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 ************************************ 00:09:55.857 START TEST env 00:09:55.857 ************************************ 00:09:55.857 16:34:02 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:09:56.119 * Looking for test storage... 00:09:56.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:09:56.119 16:34:02 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:56.119 16:34:02 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:56.119 16:34:02 env -- common/autotest_common.sh@1691 -- # lcov --version 00:09:56.119 16:34:03 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:56.119 16:34:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.119 16:34:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.119 16:34:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.119 16:34:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.119 16:34:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.119 16:34:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.119 16:34:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.119 16:34:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.119 16:34:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.119 16:34:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.119 16:34:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.119 16:34:03 env -- scripts/common.sh@344 -- # case "$op" in 00:09:56.119 16:34:03 env -- scripts/common.sh@345 -- # : 1 00:09:56.119 16:34:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.119 16:34:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.119 16:34:03 env -- scripts/common.sh@365 -- # decimal 1 00:09:56.119 16:34:03 env -- scripts/common.sh@353 -- # local d=1 00:09:56.119 16:34:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.119 16:34:03 env -- scripts/common.sh@355 -- # echo 1 00:09:56.119 16:34:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.119 16:34:03 env -- scripts/common.sh@366 -- # decimal 2 00:09:56.119 16:34:03 env -- scripts/common.sh@353 -- # local d=2 00:09:56.119 16:34:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.119 16:34:03 env -- scripts/common.sh@355 -- # echo 2 00:09:56.119 16:34:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.119 16:34:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.119 16:34:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.119 16:34:03 env -- scripts/common.sh@368 -- # return 0 00:09:56.119 16:34:03 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.119 16:34:03 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:56.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.119 --rc genhtml_branch_coverage=1 00:09:56.119 --rc genhtml_function_coverage=1 00:09:56.119 --rc genhtml_legend=1 00:09:56.119 --rc geninfo_all_blocks=1 00:09:56.119 --rc geninfo_unexecuted_blocks=1 00:09:56.119 00:09:56.119 ' 00:09:56.119 16:34:03 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:56.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.119 --rc genhtml_branch_coverage=1 00:09:56.119 --rc genhtml_function_coverage=1 00:09:56.119 --rc genhtml_legend=1 00:09:56.119 --rc geninfo_all_blocks=1 00:09:56.119 --rc geninfo_unexecuted_blocks=1 00:09:56.119 00:09:56.119 ' 00:09:56.119 16:34:03 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:56.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:56.119 --rc genhtml_branch_coverage=1 00:09:56.119 --rc genhtml_function_coverage=1 00:09:56.119 --rc genhtml_legend=1 00:09:56.119 --rc geninfo_all_blocks=1 00:09:56.119 --rc geninfo_unexecuted_blocks=1 00:09:56.119 00:09:56.119 ' 00:09:56.119 16:34:03 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:56.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.119 --rc genhtml_branch_coverage=1 00:09:56.119 --rc genhtml_function_coverage=1 00:09:56.119 --rc genhtml_legend=1 00:09:56.119 --rc geninfo_all_blocks=1 00:09:56.119 --rc geninfo_unexecuted_blocks=1 00:09:56.119 00:09:56.119 ' 00:09:56.119 16:34:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:09:56.119 16:34:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.119 16:34:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.119 16:34:03 env -- common/autotest_common.sh@10 -- # set +x 00:09:56.119 ************************************ 00:09:56.119 START TEST env_memory 00:09:56.119 ************************************ 00:09:56.119 16:34:03 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:09:56.119 00:09:56.119 00:09:56.119 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.119 http://cunit.sourceforge.net/ 00:09:56.119 00:09:56.119 00:09:56.119 Suite: memory 00:09:56.119 Test: alloc and free memory map ...[2024-11-05 16:34:03.151480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:56.119 passed 00:09:56.119 Test: mem map translation ...[2024-11-05 16:34:03.176817] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:56.119 [2024-11-05 
16:34:03.176837] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:56.119 [2024-11-05 16:34:03.176884] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:56.120 [2024-11-05 16:34:03.176893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:56.381 passed 00:09:56.381 Test: mem map registration ...[2024-11-05 16:34:03.232001] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:56.381 [2024-11-05 16:34:03.232019] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:56.381 passed 00:09:56.381 Test: mem map adjacent registrations ...passed 00:09:56.381 00:09:56.381 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.381 suites 1 1 n/a 0 0 00:09:56.381 tests 4 4 4 0 0 00:09:56.381 asserts 152 152 152 0 n/a 00:09:56.381 00:09:56.381 Elapsed time = 0.191 seconds 00:09:56.381 00:09:56.381 real 0m0.206s 00:09:56.381 user 0m0.194s 00:09:56.381 sys 0m0.011s 00:09:56.381 16:34:03 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.381 16:34:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:56.381 ************************************ 00:09:56.381 END TEST env_memory 00:09:56.381 ************************************ 00:09:56.381 16:34:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:56.381 16:34:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:09:56.381 16:34:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.381 16:34:03 env -- common/autotest_common.sh@10 -- # set +x 00:09:56.381 ************************************ 00:09:56.381 START TEST env_vtophys 00:09:56.381 ************************************ 00:09:56.381 16:34:03 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:56.381 EAL: lib.eal log level changed from notice to debug 00:09:56.381 EAL: Detected lcore 0 as core 0 on socket 0 00:09:56.381 EAL: Detected lcore 1 as core 1 on socket 0 00:09:56.381 EAL: Detected lcore 2 as core 2 on socket 0 00:09:56.381 EAL: Detected lcore 3 as core 3 on socket 0 00:09:56.381 EAL: Detected lcore 4 as core 4 on socket 0 00:09:56.381 EAL: Detected lcore 5 as core 5 on socket 0 00:09:56.381 EAL: Detected lcore 6 as core 6 on socket 0 00:09:56.381 EAL: Detected lcore 7 as core 7 on socket 0 00:09:56.381 EAL: Detected lcore 8 as core 8 on socket 0 00:09:56.381 EAL: Detected lcore 9 as core 9 on socket 0 00:09:56.381 EAL: Detected lcore 10 as core 10 on socket 0 00:09:56.381 EAL: Detected lcore 11 as core 11 on socket 0 00:09:56.381 EAL: Detected lcore 12 as core 12 on socket 0 00:09:56.381 EAL: Detected lcore 13 as core 13 on socket 0 00:09:56.381 EAL: Detected lcore 14 as core 14 on socket 0 00:09:56.381 EAL: Detected lcore 15 as core 15 on socket 0 00:09:56.381 EAL: Detected lcore 16 as core 16 on socket 0 00:09:56.381 EAL: Detected lcore 17 as core 17 on socket 0 00:09:56.381 EAL: Detected lcore 18 as core 18 on socket 0 00:09:56.381 EAL: Detected lcore 19 as core 19 on socket 0 00:09:56.381 EAL: Detected lcore 20 as core 20 on socket 0 00:09:56.381 EAL: Detected lcore 21 as core 21 on socket 0 00:09:56.381 EAL: Detected lcore 22 as core 22 on socket 0 00:09:56.381 EAL: Detected lcore 23 as core 23 on socket 0 00:09:56.381 EAL: Detected lcore 24 as core 24 on socket 0 00:09:56.381 EAL: Detected lcore 25 
as core 25 on socket 0 00:09:56.381 EAL: Detected lcore 26 as core 26 on socket 0 00:09:56.381 EAL: Detected lcore 27 as core 27 on socket 0 00:09:56.381 EAL: Detected lcore 28 as core 28 on socket 0 00:09:56.381 EAL: Detected lcore 29 as core 29 on socket 0 00:09:56.381 EAL: Detected lcore 30 as core 30 on socket 0 00:09:56.381 EAL: Detected lcore 31 as core 31 on socket 0 00:09:56.381 EAL: Detected lcore 32 as core 32 on socket 0 00:09:56.381 EAL: Detected lcore 33 as core 33 on socket 0 00:09:56.381 EAL: Detected lcore 34 as core 34 on socket 0 00:09:56.381 EAL: Detected lcore 35 as core 35 on socket 0 00:09:56.381 EAL: Detected lcore 36 as core 0 on socket 1 00:09:56.381 EAL: Detected lcore 37 as core 1 on socket 1 00:09:56.381 EAL: Detected lcore 38 as core 2 on socket 1 00:09:56.381 EAL: Detected lcore 39 as core 3 on socket 1 00:09:56.381 EAL: Detected lcore 40 as core 4 on socket 1 00:09:56.381 EAL: Detected lcore 41 as core 5 on socket 1 00:09:56.381 EAL: Detected lcore 42 as core 6 on socket 1 00:09:56.381 EAL: Detected lcore 43 as core 7 on socket 1 00:09:56.381 EAL: Detected lcore 44 as core 8 on socket 1 00:09:56.381 EAL: Detected lcore 45 as core 9 on socket 1 00:09:56.381 EAL: Detected lcore 46 as core 10 on socket 1 00:09:56.381 EAL: Detected lcore 47 as core 11 on socket 1 00:09:56.381 EAL: Detected lcore 48 as core 12 on socket 1 00:09:56.381 EAL: Detected lcore 49 as core 13 on socket 1 00:09:56.381 EAL: Detected lcore 50 as core 14 on socket 1 00:09:56.381 EAL: Detected lcore 51 as core 15 on socket 1 00:09:56.381 EAL: Detected lcore 52 as core 16 on socket 1 00:09:56.381 EAL: Detected lcore 53 as core 17 on socket 1 00:09:56.381 EAL: Detected lcore 54 as core 18 on socket 1 00:09:56.381 EAL: Detected lcore 55 as core 19 on socket 1 00:09:56.381 EAL: Detected lcore 56 as core 20 on socket 1 00:09:56.381 EAL: Detected lcore 57 as core 21 on socket 1 00:09:56.381 EAL: Detected lcore 58 as core 22 on socket 1 00:09:56.381 EAL: Detected lcore 59 as 
core 23 on socket 1 00:09:56.381 EAL: Detected lcore 60 as core 24 on socket 1 00:09:56.381 EAL: Detected lcore 61 as core 25 on socket 1 00:09:56.381 EAL: Detected lcore 62 as core 26 on socket 1 00:09:56.381 EAL: Detected lcore 63 as core 27 on socket 1 00:09:56.381 EAL: Detected lcore 64 as core 28 on socket 1 00:09:56.381 EAL: Detected lcore 65 as core 29 on socket 1 00:09:56.381 EAL: Detected lcore 66 as core 30 on socket 1 00:09:56.381 EAL: Detected lcore 67 as core 31 on socket 1 00:09:56.381 EAL: Detected lcore 68 as core 32 on socket 1 00:09:56.381 EAL: Detected lcore 69 as core 33 on socket 1 00:09:56.381 EAL: Detected lcore 70 as core 34 on socket 1 00:09:56.381 EAL: Detected lcore 71 as core 35 on socket 1 00:09:56.381 EAL: Detected lcore 72 as core 0 on socket 0 00:09:56.381 EAL: Detected lcore 73 as core 1 on socket 0 00:09:56.381 EAL: Detected lcore 74 as core 2 on socket 0 00:09:56.381 EAL: Detected lcore 75 as core 3 on socket 0 00:09:56.381 EAL: Detected lcore 76 as core 4 on socket 0 00:09:56.381 EAL: Detected lcore 77 as core 5 on socket 0 00:09:56.381 EAL: Detected lcore 78 as core 6 on socket 0 00:09:56.381 EAL: Detected lcore 79 as core 7 on socket 0 00:09:56.381 EAL: Detected lcore 80 as core 8 on socket 0 00:09:56.381 EAL: Detected lcore 81 as core 9 on socket 0 00:09:56.381 EAL: Detected lcore 82 as core 10 on socket 0 00:09:56.381 EAL: Detected lcore 83 as core 11 on socket 0 00:09:56.381 EAL: Detected lcore 84 as core 12 on socket 0 00:09:56.381 EAL: Detected lcore 85 as core 13 on socket 0 00:09:56.381 EAL: Detected lcore 86 as core 14 on socket 0 00:09:56.381 EAL: Detected lcore 87 as core 15 on socket 0 00:09:56.382 EAL: Detected lcore 88 as core 16 on socket 0 00:09:56.382 EAL: Detected lcore 89 as core 17 on socket 0 00:09:56.382 EAL: Detected lcore 90 as core 18 on socket 0 00:09:56.382 EAL: Detected lcore 91 as core 19 on socket 0 00:09:56.382 EAL: Detected lcore 92 as core 20 on socket 0 00:09:56.382 EAL: Detected lcore 93 as 
core 21 on socket 0 00:09:56.382 EAL: Detected lcore 94 as core 22 on socket 0 00:09:56.382 EAL: Detected lcore 95 as core 23 on socket 0 00:09:56.382 EAL: Detected lcore 96 as core 24 on socket 0 00:09:56.382 EAL: Detected lcore 97 as core 25 on socket 0 00:09:56.382 EAL: Detected lcore 98 as core 26 on socket 0 00:09:56.382 EAL: Detected lcore 99 as core 27 on socket 0 00:09:56.382 EAL: Detected lcore 100 as core 28 on socket 0 00:09:56.382 EAL: Detected lcore 101 as core 29 on socket 0 00:09:56.382 EAL: Detected lcore 102 as core 30 on socket 0 00:09:56.382 EAL: Detected lcore 103 as core 31 on socket 0 00:09:56.382 EAL: Detected lcore 104 as core 32 on socket 0 00:09:56.382 EAL: Detected lcore 105 as core 33 on socket 0 00:09:56.382 EAL: Detected lcore 106 as core 34 on socket 0 00:09:56.382 EAL: Detected lcore 107 as core 35 on socket 0 00:09:56.382 EAL: Detected lcore 108 as core 0 on socket 1 00:09:56.382 EAL: Detected lcore 109 as core 1 on socket 1 00:09:56.382 EAL: Detected lcore 110 as core 2 on socket 1 00:09:56.382 EAL: Detected lcore 111 as core 3 on socket 1 00:09:56.382 EAL: Detected lcore 112 as core 4 on socket 1 00:09:56.382 EAL: Detected lcore 113 as core 5 on socket 1 00:09:56.382 EAL: Detected lcore 114 as core 6 on socket 1 00:09:56.382 EAL: Detected lcore 115 as core 7 on socket 1 00:09:56.382 EAL: Detected lcore 116 as core 8 on socket 1 00:09:56.382 EAL: Detected lcore 117 as core 9 on socket 1 00:09:56.382 EAL: Detected lcore 118 as core 10 on socket 1 00:09:56.382 EAL: Detected lcore 119 as core 11 on socket 1 00:09:56.382 EAL: Detected lcore 120 as core 12 on socket 1 00:09:56.382 EAL: Detected lcore 121 as core 13 on socket 1 00:09:56.382 EAL: Detected lcore 122 as core 14 on socket 1 00:09:56.382 EAL: Detected lcore 123 as core 15 on socket 1 00:09:56.382 EAL: Detected lcore 124 as core 16 on socket 1 00:09:56.382 EAL: Detected lcore 125 as core 17 on socket 1 00:09:56.382 EAL: Detected lcore 126 as core 18 on socket 1 00:09:56.382 
EAL: Detected lcore 127 as core 19 on socket 1 00:09:56.382 EAL: Skipped lcore 128 as core 20 on socket 1 00:09:56.382 EAL: Skipped lcore 129 as core 21 on socket 1 00:09:56.382 EAL: Skipped lcore 130 as core 22 on socket 1 00:09:56.382 EAL: Skipped lcore 131 as core 23 on socket 1 00:09:56.382 EAL: Skipped lcore 132 as core 24 on socket 1 00:09:56.382 EAL: Skipped lcore 133 as core 25 on socket 1 00:09:56.382 EAL: Skipped lcore 134 as core 26 on socket 1 00:09:56.382 EAL: Skipped lcore 135 as core 27 on socket 1 00:09:56.382 EAL: Skipped lcore 136 as core 28 on socket 1 00:09:56.382 EAL: Skipped lcore 137 as core 29 on socket 1 00:09:56.382 EAL: Skipped lcore 138 as core 30 on socket 1 00:09:56.382 EAL: Skipped lcore 139 as core 31 on socket 1 00:09:56.382 EAL: Skipped lcore 140 as core 32 on socket 1 00:09:56.382 EAL: Skipped lcore 141 as core 33 on socket 1 00:09:56.382 EAL: Skipped lcore 142 as core 34 on socket 1 00:09:56.382 EAL: Skipped lcore 143 as core 35 on socket 1 00:09:56.382 EAL: Maximum logical cores by configuration: 128 00:09:56.382 EAL: Detected CPU lcores: 128 00:09:56.382 EAL: Detected NUMA nodes: 2 00:09:56.382 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:56.382 EAL: Detected shared linkage of DPDK 00:09:56.382 EAL: No shared files mode enabled, IPC will be disabled 00:09:56.382 EAL: Bus pci wants IOVA as 'DC' 00:09:56.382 EAL: Buses did not request a specific IOVA mode. 00:09:56.382 EAL: IOMMU is available, selecting IOVA as VA mode. 00:09:56.382 EAL: Selected IOVA mode 'VA' 00:09:56.382 EAL: Probing VFIO support... 00:09:56.382 EAL: IOMMU type 1 (Type 1) is supported 00:09:56.382 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:56.382 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:56.382 EAL: VFIO support initialized 00:09:56.382 EAL: Ask a virtual area of 0x2e000 bytes 00:09:56.382 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:56.382 EAL: Setting up physically contiguous memory... 
00:09:56.382 EAL: Setting maximum number of open files to 524288 00:09:56.382 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:56.382 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:09:56.382 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:56.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:56.382 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:56.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:56.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:56.382 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:56.382 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:56.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:56.382 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:56.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:56.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:56.382 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:56.382 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:56.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:56.382 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:56.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:56.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:56.382 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:56.382 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:56.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:56.382 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:56.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:56.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:56.382 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:56.382 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:56.382 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:09:56.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:56.382 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:09:56.382 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:56.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:56.382 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:09:56.382 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:09:56.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:56.382 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:09:56.382 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:56.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:56.382 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:09:56.382 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:09:56.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:56.382 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:09:56.382 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:56.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:56.382 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:09:56.382 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:09:56.382 EAL: Ask a virtual area of 0x61000 bytes 00:09:56.382 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:09:56.382 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:56.382 EAL: Ask a virtual area of 0x400000000 bytes 00:09:56.382 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:09:56.382 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:09:56.382 EAL: Hugepages will be freed exactly as allocated. 
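The EAL trace above reserves four memseg lists per NUMA node, each backed by 0x400000000 bytes of virtual address space. A quick sanity check of that arithmetic (a standalone sketch, not part of the SPDK harness) confirms the totals implied by the trace:

```shell
# Sanity-check the memseg reservation sizes seen in the EAL trace:
# 4 lists per socket, 2 sockets, 0x400000000 bytes of VA each.
lists_per_socket=4
sockets=2
bytes_per_list=$((0x400000000))
total=$((lists_per_socket * sockets * bytes_per_list))
echo "bytes per list: $bytes_per_list"          # prints: bytes per list: 17179869184
echo "total VA reserved: $((total >> 30)) GiB"  # prints: total VA reserved: 128 GiB
```

So each list is 16 GiB of reserved (not yet populated) virtual address space, 128 GiB across both sockets, which matches the eight `VA reserved for memseg list` entries in the log.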
00:09:56.382 EAL: No shared files mode enabled, IPC is disabled 00:09:56.382 EAL: No shared files mode enabled, IPC is disabled 00:09:56.382 EAL: TSC frequency is ~2400000 KHz 00:09:56.382 EAL: Main lcore 0 is ready (tid=7fc5d0665a00;cpuset=[0]) 00:09:56.382 EAL: Trying to obtain current memory policy. 00:09:56.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.382 EAL: Restoring previous memory policy: 0 00:09:56.382 EAL: request: mp_malloc_sync 00:09:56.382 EAL: No shared files mode enabled, IPC is disabled 00:09:56.382 EAL: Heap on socket 0 was expanded by 2MB 00:09:56.382 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:56.642 EAL: Mem event callback 'spdk:(nil)' registered 00:09:56.642 00:09:56.642 00:09:56.642 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.642 http://cunit.sourceforge.net/ 00:09:56.642 00:09:56.642 00:09:56.642 Suite: components_suite 00:09:56.642 Test: vtophys_malloc_test ...passed 00:09:56.642 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:56.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.642 EAL: Restoring previous memory policy: 4 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was expanded by 4MB 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was shrunk by 4MB 00:09:56.642 EAL: Trying to obtain current memory policy. 
00:09:56.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.642 EAL: Restoring previous memory policy: 4 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was expanded by 6MB 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was shrunk by 6MB 00:09:56.642 EAL: Trying to obtain current memory policy. 00:09:56.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.642 EAL: Restoring previous memory policy: 4 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was expanded by 10MB 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was shrunk by 10MB 00:09:56.642 EAL: Trying to obtain current memory policy. 00:09:56.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.642 EAL: Restoring previous memory policy: 4 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was expanded by 18MB 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was shrunk by 18MB 00:09:56.642 EAL: Trying to obtain current memory policy. 
00:09:56.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.642 EAL: Restoring previous memory policy: 4 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was expanded by 34MB 00:09:56.642 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.642 EAL: request: mp_malloc_sync 00:09:56.642 EAL: No shared files mode enabled, IPC is disabled 00:09:56.642 EAL: Heap on socket 0 was shrunk by 34MB 00:09:56.643 EAL: Trying to obtain current memory policy. 00:09:56.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.643 EAL: Restoring previous memory policy: 4 00:09:56.643 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.643 EAL: request: mp_malloc_sync 00:09:56.643 EAL: No shared files mode enabled, IPC is disabled 00:09:56.643 EAL: Heap on socket 0 was expanded by 66MB 00:09:56.643 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.643 EAL: request: mp_malloc_sync 00:09:56.643 EAL: No shared files mode enabled, IPC is disabled 00:09:56.643 EAL: Heap on socket 0 was shrunk by 66MB 00:09:56.643 EAL: Trying to obtain current memory policy. 00:09:56.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.643 EAL: Restoring previous memory policy: 4 00:09:56.643 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.643 EAL: request: mp_malloc_sync 00:09:56.643 EAL: No shared files mode enabled, IPC is disabled 00:09:56.643 EAL: Heap on socket 0 was expanded by 130MB 00:09:56.643 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.643 EAL: request: mp_malloc_sync 00:09:56.643 EAL: No shared files mode enabled, IPC is disabled 00:09:56.643 EAL: Heap on socket 0 was shrunk by 130MB 00:09:56.643 EAL: Trying to obtain current memory policy. 
00:09:56.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.643 EAL: Restoring previous memory policy: 4 00:09:56.643 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.643 EAL: request: mp_malloc_sync 00:09:56.643 EAL: No shared files mode enabled, IPC is disabled 00:09:56.643 EAL: Heap on socket 0 was expanded by 258MB 00:09:56.643 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.643 EAL: request: mp_malloc_sync 00:09:56.643 EAL: No shared files mode enabled, IPC is disabled 00:09:56.643 EAL: Heap on socket 0 was shrunk by 258MB 00:09:56.643 EAL: Trying to obtain current memory policy. 00:09:56.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.903 EAL: Restoring previous memory policy: 4 00:09:56.903 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.903 EAL: request: mp_malloc_sync 00:09:56.903 EAL: No shared files mode enabled, IPC is disabled 00:09:56.903 EAL: Heap on socket 0 was expanded by 514MB 00:09:56.903 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.903 EAL: request: mp_malloc_sync 00:09:56.903 EAL: No shared files mode enabled, IPC is disabled 00:09:56.903 EAL: Heap on socket 0 was shrunk by 514MB 00:09:56.903 EAL: Trying to obtain current memory policy. 
00:09:56.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:56.903 EAL: Restoring previous memory policy: 4 00:09:56.903 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.903 EAL: request: mp_malloc_sync 00:09:56.903 EAL: No shared files mode enabled, IPC is disabled 00:09:56.903 EAL: Heap on socket 0 was expanded by 1026MB 00:09:57.164 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.164 EAL: request: mp_malloc_sync 00:09:57.164 EAL: No shared files mode enabled, IPC is disabled 00:09:57.164 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:57.164 passed 00:09:57.164 00:09:57.164 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.164 suites 1 1 n/a 0 0 00:09:57.164 tests 2 2 2 0 0 00:09:57.164 asserts 497 497 497 0 n/a 00:09:57.164 00:09:57.164 Elapsed time = 0.643 seconds 00:09:57.164 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.164 EAL: request: mp_malloc_sync 00:09:57.164 EAL: No shared files mode enabled, IPC is disabled 00:09:57.164 EAL: Heap on socket 0 was shrunk by 2MB 00:09:57.164 EAL: No shared files mode enabled, IPC is disabled 00:09:57.164 EAL: No shared files mode enabled, IPC is disabled 00:09:57.164 EAL: No shared files mode enabled, IPC is disabled 00:09:57.164 00:09:57.164 real 0m0.768s 00:09:57.164 user 0m0.417s 00:09:57.164 sys 0m0.329s 00:09:57.164 16:34:04 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.164 16:34:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:57.164 ************************************ 00:09:57.164 END TEST env_vtophys 00:09:57.164 ************************************ 00:09:57.164 16:34:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:09:57.164 16:34:04 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:57.164 16:34:04 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.164 16:34:04 env -- common/autotest_common.sh@10 -- # set +x 00:09:57.164 
************************************ 00:09:57.164 START TEST env_pci 00:09:57.164 ************************************ 00:09:57.426 16:34:04 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:09:57.426 00:09:57.426 00:09:57.426 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.426 http://cunit.sourceforge.net/ 00:09:57.426 00:09:57.426 00:09:57.426 Suite: pci 00:09:57.426 Test: pci_hook ...[2024-11-05 16:34:04.246783] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2932590 has claimed it 00:09:57.426 EAL: Cannot find device (10000:00:01.0) 00:09:57.426 EAL: Failed to attach device on primary process 00:09:57.426 passed 00:09:57.426 00:09:57.426 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.426 suites 1 1 n/a 0 0 00:09:57.426 tests 1 1 1 0 0 00:09:57.426 asserts 25 25 25 0 n/a 00:09:57.426 00:09:57.426 Elapsed time = 0.031 seconds 00:09:57.426 00:09:57.426 real 0m0.052s 00:09:57.426 user 0m0.015s 00:09:57.426 sys 0m0.036s 00:09:57.426 16:34:04 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.426 16:34:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:57.426 ************************************ 00:09:57.426 END TEST env_pci 00:09:57.426 ************************************ 00:09:57.426 16:34:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:57.426 16:34:04 env -- env/env.sh@15 -- # uname 00:09:57.426 16:34:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:57.426 16:34:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:57.426 16:34:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:57.426 16:34:04 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:57.426 16:34:04 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.426 16:34:04 env -- common/autotest_common.sh@10 -- # set +x 00:09:57.426 ************************************ 00:09:57.426 START TEST env_dpdk_post_init 00:09:57.426 ************************************ 00:09:57.426 16:34:04 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:57.426 EAL: Detected CPU lcores: 128 00:09:57.426 EAL: Detected NUMA nodes: 2 00:09:57.426 EAL: Detected shared linkage of DPDK 00:09:57.426 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:57.426 EAL: Selected IOVA mode 'VA' 00:09:57.426 EAL: VFIO support initialized 00:09:57.426 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:57.687 EAL: Using IOMMU type 1 (Type 1) 00:09:57.687 EAL: Ignore mapping IO port bar(1) 00:09:57.687 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:09:57.948 EAL: Ignore mapping IO port bar(1) 00:09:57.948 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:09:58.209 EAL: Ignore mapping IO port bar(1) 00:09:58.209 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:09:58.469 EAL: Ignore mapping IO port bar(1) 00:09:58.469 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:09:58.469 EAL: Ignore mapping IO port bar(1) 00:09:58.729 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:09:58.729 EAL: Ignore mapping IO port bar(1) 00:09:58.990 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:09:58.990 EAL: Ignore mapping IO port bar(1) 00:09:59.250 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:09:59.250 EAL: Ignore mapping IO port bar(1) 00:09:59.250 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:09:59.511 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:09:59.771 EAL: Ignore mapping IO port bar(1) 00:09:59.771 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:10:00.038 EAL: Ignore mapping IO port bar(1) 00:10:00.038 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:10:00.038 EAL: Ignore mapping IO port bar(1) 00:10:00.343 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:10:00.343 EAL: Ignore mapping IO port bar(1) 00:10:00.680 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:10:00.680 EAL: Ignore mapping IO port bar(1) 00:10:00.680 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:10:00.940 EAL: Ignore mapping IO port bar(1) 00:10:00.940 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:10:00.940 EAL: Ignore mapping IO port bar(1) 00:10:01.201 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:10:01.201 EAL: Ignore mapping IO port bar(1) 00:10:01.461 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:10:01.461 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:10:01.461 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:10:01.461 Starting DPDK initialization... 00:10:01.461 Starting SPDK post initialization... 00:10:01.461 SPDK NVMe probe 00:10:01.461 Attaching to 0000:65:00.0 00:10:01.461 Attached to 0000:65:00.0 00:10:01.461 Cleaning up... 
00:10:03.374 00:10:03.374 real 0m5.731s 00:10:03.374 user 0m0.092s 00:10:03.374 sys 0m0.184s 00:10:03.374 16:34:10 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.374 16:34:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:03.374 ************************************ 00:10:03.374 END TEST env_dpdk_post_init 00:10:03.374 ************************************ 00:10:03.374 16:34:10 env -- env/env.sh@26 -- # uname 00:10:03.374 16:34:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:03.374 16:34:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:03.374 16:34:10 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:03.374 16:34:10 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.374 16:34:10 env -- common/autotest_common.sh@10 -- # set +x 00:10:03.374 ************************************ 00:10:03.374 START TEST env_mem_callbacks 00:10:03.374 ************************************ 00:10:03.374 16:34:10 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:03.374 EAL: Detected CPU lcores: 128 00:10:03.374 EAL: Detected NUMA nodes: 2 00:10:03.374 EAL: Detected shared linkage of DPDK 00:10:03.374 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:03.374 EAL: Selected IOVA mode 'VA' 00:10:03.374 EAL: VFIO support initialized 00:10:03.374 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:03.374 00:10:03.374 00:10:03.374 CUnit - A unit testing framework for C - Version 2.1-3 00:10:03.374 http://cunit.sourceforge.net/ 00:10:03.374 00:10:03.374 00:10:03.374 Suite: memory 00:10:03.374 Test: test ... 
00:10:03.374 register 0x200000200000 2097152 00:10:03.374 malloc 3145728 00:10:03.374 register 0x200000400000 4194304 00:10:03.374 buf 0x200000500000 len 3145728 PASSED 00:10:03.374 malloc 64 00:10:03.374 buf 0x2000004fff40 len 64 PASSED 00:10:03.374 malloc 4194304 00:10:03.374 register 0x200000800000 6291456 00:10:03.374 buf 0x200000a00000 len 4194304 PASSED 00:10:03.374 free 0x200000500000 3145728 00:10:03.374 free 0x2000004fff40 64 00:10:03.374 unregister 0x200000400000 4194304 PASSED 00:10:03.374 free 0x200000a00000 4194304 00:10:03.374 unregister 0x200000800000 6291456 PASSED 00:10:03.374 malloc 8388608 00:10:03.374 register 0x200000400000 10485760 00:10:03.374 buf 0x200000600000 len 8388608 PASSED 00:10:03.374 free 0x200000600000 8388608 00:10:03.374 unregister 0x200000400000 10485760 PASSED 00:10:03.374 passed 00:10:03.374 00:10:03.374 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.374 suites 1 1 n/a 0 0 00:10:03.374 tests 1 1 1 0 0 00:10:03.374 asserts 15 15 15 0 n/a 00:10:03.374 00:10:03.374 Elapsed time = 0.007 seconds 00:10:03.374 00:10:03.374 real 0m0.063s 00:10:03.374 user 0m0.018s 00:10:03.374 sys 0m0.045s 00:10:03.374 16:34:10 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.374 16:34:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:03.374 ************************************ 00:10:03.374 END TEST env_mem_callbacks 00:10:03.374 ************************************ 00:10:03.374 00:10:03.374 real 0m7.430s 00:10:03.374 user 0m0.999s 00:10:03.374 sys 0m0.983s 00:10:03.374 16:34:10 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.374 16:34:10 env -- common/autotest_common.sh@10 -- # set +x 00:10:03.374 ************************************ 00:10:03.374 END TEST env 00:10:03.374 ************************************ 00:10:03.374 16:34:10 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:10:03.374 16:34:10 
-- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:03.374 16:34:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.374 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:10:03.374 ************************************ 00:10:03.374 START TEST rpc 00:10:03.374 ************************************ 00:10:03.374 16:34:10 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:10:03.636 * Looking for test storage... 00:10:03.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:10:03.636 16:34:10 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.636 16:34:10 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.636 16:34:10 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.636 16:34:10 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.636 16:34:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.636 16:34:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.636 16:34:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.636 16:34:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.636 16:34:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.636 16:34:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.636 16:34:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.636 16:34:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.636 16:34:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.636 16:34:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.636 16:34:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.636 16:34:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:03.637 16:34:10 rpc -- scripts/common.sh@345 -- # : 1 00:10:03.637 16:34:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.637 16:34:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.637 16:34:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:03.637 16:34:10 rpc -- scripts/common.sh@353 -- # local d=1 00:10:03.637 16:34:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.637 16:34:10 rpc -- scripts/common.sh@355 -- # echo 1 00:10:03.637 16:34:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.637 16:34:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:03.637 16:34:10 rpc -- scripts/common.sh@353 -- # local d=2 00:10:03.637 16:34:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.637 16:34:10 rpc -- scripts/common.sh@355 -- # echo 2 00:10:03.637 16:34:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.637 16:34:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.637 16:34:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.637 16:34:10 rpc -- scripts/common.sh@368 -- # return 0 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.637 --rc genhtml_branch_coverage=1 00:10:03.637 --rc genhtml_function_coverage=1 00:10:03.637 --rc genhtml_legend=1 00:10:03.637 --rc geninfo_all_blocks=1 00:10:03.637 --rc geninfo_unexecuted_blocks=1 00:10:03.637 00:10:03.637 ' 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.637 --rc genhtml_branch_coverage=1 00:10:03.637 --rc genhtml_function_coverage=1 00:10:03.637 --rc genhtml_legend=1 00:10:03.637 --rc geninfo_all_blocks=1 00:10:03.637 --rc geninfo_unexecuted_blocks=1 00:10:03.637 00:10:03.637 ' 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:03.637 --rc genhtml_branch_coverage=1 00:10:03.637 --rc genhtml_function_coverage=1 00:10:03.637 --rc genhtml_legend=1 00:10:03.637 --rc geninfo_all_blocks=1 00:10:03.637 --rc geninfo_unexecuted_blocks=1 00:10:03.637 00:10:03.637 ' 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.637 --rc genhtml_branch_coverage=1 00:10:03.637 --rc genhtml_function_coverage=1 00:10:03.637 --rc genhtml_legend=1 00:10:03.637 --rc geninfo_all_blocks=1 00:10:03.637 --rc geninfo_unexecuted_blocks=1 00:10:03.637 00:10:03.637 ' 00:10:03.637 16:34:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2934049 00:10:03.637 16:34:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:03.637 16:34:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2934049 00:10:03.637 16:34:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@833 -- # '[' -z 2934049 ']' 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:03.637 16:34:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.637 [2024-11-05 16:34:10.632386] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:03.637 [2024-11-05 16:34:10.632458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934049 ] 00:10:03.898 [2024-11-05 16:34:10.707892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.898 [2024-11-05 16:34:10.749064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:03.898 [2024-11-05 16:34:10.749099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2934049' to capture a snapshot of events at runtime. 00:10:03.898 [2024-11-05 16:34:10.749107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.898 [2024-11-05 16:34:10.749114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.898 [2024-11-05 16:34:10.749120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2934049 for offline analysis/debug. 
00:10:03.898 [2024-11-05 16:34:10.749693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.471 16:34:11 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.471 16:34:11 rpc -- common/autotest_common.sh@866 -- # return 0 00:10:04.471 16:34:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:10:04.471 16:34:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:10:04.471 16:34:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:04.471 16:34:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:04.471 16:34:11 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.471 16:34:11 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.471 16:34:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.471 ************************************ 00:10:04.471 START TEST rpc_integrity 00:10:04.471 ************************************ 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:10:04.471 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.471 16:34:11 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:10:04.471 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:04.471 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:04.471 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.471 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:04.471 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.471 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.732 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.732 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:04.732 { 00:10:04.732 "name": "Malloc0", 00:10:04.732 "aliases": [ 00:10:04.732 "7daa8aba-1b02-4caa-9a88-376c182b351b" 00:10:04.732 ], 00:10:04.732 "product_name": "Malloc disk", 00:10:04.732 "block_size": 512, 00:10:04.732 "num_blocks": 16384, 00:10:04.732 "uuid": "7daa8aba-1b02-4caa-9a88-376c182b351b", 00:10:04.732 "assigned_rate_limits": { 00:10:04.732 "rw_ios_per_sec": 0, 00:10:04.732 "rw_mbytes_per_sec": 0, 00:10:04.732 "r_mbytes_per_sec": 0, 00:10:04.732 "w_mbytes_per_sec": 0 00:10:04.732 }, 00:10:04.732 "claimed": false, 00:10:04.732 "zoned": false, 00:10:04.732 "supported_io_types": { 00:10:04.732 "read": true, 00:10:04.732 "write": true, 00:10:04.732 "unmap": true, 00:10:04.732 "flush": true, 00:10:04.733 "reset": true, 00:10:04.733 "nvme_admin": false, 00:10:04.733 "nvme_io": false, 00:10:04.733 "nvme_io_md": false, 00:10:04.733 "write_zeroes": true, 00:10:04.733 "zcopy": true, 00:10:04.733 "get_zone_info": false, 00:10:04.733 
"zone_management": false, 00:10:04.733 "zone_append": false, 00:10:04.733 "compare": false, 00:10:04.733 "compare_and_write": false, 00:10:04.733 "abort": true, 00:10:04.733 "seek_hole": false, 00:10:04.733 "seek_data": false, 00:10:04.733 "copy": true, 00:10:04.733 "nvme_iov_md": false 00:10:04.733 }, 00:10:04.733 "memory_domains": [ 00:10:04.733 { 00:10:04.733 "dma_device_id": "system", 00:10:04.733 "dma_device_type": 1 00:10:04.733 }, 00:10:04.733 { 00:10:04.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.733 "dma_device_type": 2 00:10:04.733 } 00:10:04.733 ], 00:10:04.733 "driver_specific": {} 00:10:04.733 } 00:10:04.733 ]' 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.733 [2024-11-05 16:34:11.589440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:04.733 [2024-11-05 16:34:11.589472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.733 [2024-11-05 16:34:11.589484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb7da0 00:10:04.733 [2024-11-05 16:34:11.589492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.733 [2024-11-05 16:34:11.590853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.733 [2024-11-05 16:34:11.590874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:04.733 Passthru0 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:04.733 { 00:10:04.733 "name": "Malloc0", 00:10:04.733 "aliases": [ 00:10:04.733 "7daa8aba-1b02-4caa-9a88-376c182b351b" 00:10:04.733 ], 00:10:04.733 "product_name": "Malloc disk", 00:10:04.733 "block_size": 512, 00:10:04.733 "num_blocks": 16384, 00:10:04.733 "uuid": "7daa8aba-1b02-4caa-9a88-376c182b351b", 00:10:04.733 "assigned_rate_limits": { 00:10:04.733 "rw_ios_per_sec": 0, 00:10:04.733 "rw_mbytes_per_sec": 0, 00:10:04.733 "r_mbytes_per_sec": 0, 00:10:04.733 "w_mbytes_per_sec": 0 00:10:04.733 }, 00:10:04.733 "claimed": true, 00:10:04.733 "claim_type": "exclusive_write", 00:10:04.733 "zoned": false, 00:10:04.733 "supported_io_types": { 00:10:04.733 "read": true, 00:10:04.733 "write": true, 00:10:04.733 "unmap": true, 00:10:04.733 "flush": true, 00:10:04.733 "reset": true, 00:10:04.733 "nvme_admin": false, 00:10:04.733 "nvme_io": false, 00:10:04.733 "nvme_io_md": false, 00:10:04.733 "write_zeroes": true, 00:10:04.733 "zcopy": true, 00:10:04.733 "get_zone_info": false, 00:10:04.733 "zone_management": false, 00:10:04.733 "zone_append": false, 00:10:04.733 "compare": false, 00:10:04.733 "compare_and_write": false, 00:10:04.733 "abort": true, 00:10:04.733 "seek_hole": false, 00:10:04.733 "seek_data": false, 00:10:04.733 "copy": true, 00:10:04.733 "nvme_iov_md": false 00:10:04.733 }, 00:10:04.733 "memory_domains": [ 00:10:04.733 { 00:10:04.733 "dma_device_id": "system", 00:10:04.733 "dma_device_type": 1 00:10:04.733 }, 00:10:04.733 { 00:10:04.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.733 "dma_device_type": 2 00:10:04.733 } 00:10:04.733 ], 00:10:04.733 "driver_specific": {} 00:10:04.733 }, 00:10:04.733 { 
00:10:04.733 "name": "Passthru0", 00:10:04.733 "aliases": [ 00:10:04.733 "a57c7e4c-ed7a-5765-a92a-88bf82a10c3c" 00:10:04.733 ], 00:10:04.733 "product_name": "passthru", 00:10:04.733 "block_size": 512, 00:10:04.733 "num_blocks": 16384, 00:10:04.733 "uuid": "a57c7e4c-ed7a-5765-a92a-88bf82a10c3c", 00:10:04.733 "assigned_rate_limits": { 00:10:04.733 "rw_ios_per_sec": 0, 00:10:04.733 "rw_mbytes_per_sec": 0, 00:10:04.733 "r_mbytes_per_sec": 0, 00:10:04.733 "w_mbytes_per_sec": 0 00:10:04.733 }, 00:10:04.733 "claimed": false, 00:10:04.733 "zoned": false, 00:10:04.733 "supported_io_types": { 00:10:04.733 "read": true, 00:10:04.733 "write": true, 00:10:04.733 "unmap": true, 00:10:04.733 "flush": true, 00:10:04.733 "reset": true, 00:10:04.733 "nvme_admin": false, 00:10:04.733 "nvme_io": false, 00:10:04.733 "nvme_io_md": false, 00:10:04.733 "write_zeroes": true, 00:10:04.733 "zcopy": true, 00:10:04.733 "get_zone_info": false, 00:10:04.733 "zone_management": false, 00:10:04.733 "zone_append": false, 00:10:04.733 "compare": false, 00:10:04.733 "compare_and_write": false, 00:10:04.733 "abort": true, 00:10:04.733 "seek_hole": false, 00:10:04.733 "seek_data": false, 00:10:04.733 "copy": true, 00:10:04.733 "nvme_iov_md": false 00:10:04.733 }, 00:10:04.733 "memory_domains": [ 00:10:04.733 { 00:10:04.733 "dma_device_id": "system", 00:10:04.733 "dma_device_type": 1 00:10:04.733 }, 00:10:04.733 { 00:10:04.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.733 "dma_device_type": 2 00:10:04.733 } 00:10:04.733 ], 00:10:04.733 "driver_specific": { 00:10:04.733 "passthru": { 00:10:04.733 "name": "Passthru0", 00:10:04.733 "base_bdev_name": "Malloc0" 00:10:04.733 } 00:10:04.733 } 00:10:04.733 } 00:10:04.733 ]' 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:04.733 16:34:11 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:04.733 16:34:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:04.733 00:10:04.733 real 0m0.278s 00:10:04.733 user 0m0.170s 00:10:04.733 sys 0m0.038s 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.733 16:34:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:04.733 ************************************ 00:10:04.733 END TEST rpc_integrity 00:10:04.733 ************************************ 00:10:04.733 16:34:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:04.733 16:34:11 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.733 16:34:11 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.733 16:34:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.994 ************************************ 00:10:04.994 START TEST rpc_plugins 
00:10:04.994 ************************************ 00:10:04.994 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:10:04.994 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:04.994 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.994 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:04.994 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.994 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:04.994 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:04.994 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.994 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:04.994 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.994 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:04.994 { 00:10:04.994 "name": "Malloc1", 00:10:04.994 "aliases": [ 00:10:04.994 "df1c4013-8754-4ab7-8c5f-0801b0218c5f" 00:10:04.994 ], 00:10:04.994 "product_name": "Malloc disk", 00:10:04.994 "block_size": 4096, 00:10:04.994 "num_blocks": 256, 00:10:04.994 "uuid": "df1c4013-8754-4ab7-8c5f-0801b0218c5f", 00:10:04.994 "assigned_rate_limits": { 00:10:04.994 "rw_ios_per_sec": 0, 00:10:04.994 "rw_mbytes_per_sec": 0, 00:10:04.994 "r_mbytes_per_sec": 0, 00:10:04.994 "w_mbytes_per_sec": 0 00:10:04.994 }, 00:10:04.994 "claimed": false, 00:10:04.994 "zoned": false, 00:10:04.995 "supported_io_types": { 00:10:04.995 "read": true, 00:10:04.995 "write": true, 00:10:04.995 "unmap": true, 00:10:04.995 "flush": true, 00:10:04.995 "reset": true, 00:10:04.995 "nvme_admin": false, 00:10:04.995 "nvme_io": false, 00:10:04.995 "nvme_io_md": false, 00:10:04.995 "write_zeroes": true, 00:10:04.995 "zcopy": true, 00:10:04.995 "get_zone_info": false, 00:10:04.995 "zone_management": false, 00:10:04.995 
"zone_append": false, 00:10:04.995 "compare": false, 00:10:04.995 "compare_and_write": false, 00:10:04.995 "abort": true, 00:10:04.995 "seek_hole": false, 00:10:04.995 "seek_data": false, 00:10:04.995 "copy": true, 00:10:04.995 "nvme_iov_md": false 00:10:04.995 }, 00:10:04.995 "memory_domains": [ 00:10:04.995 { 00:10:04.995 "dma_device_id": "system", 00:10:04.995 "dma_device_type": 1 00:10:04.995 }, 00:10:04.995 { 00:10:04.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.995 "dma_device_type": 2 00:10:04.995 } 00:10:04.995 ], 00:10:04.995 "driver_specific": {} 00:10:04.995 } 00:10:04.995 ]' 00:10:04.995 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:04.995 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:04.995 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:04.995 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.995 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:04.995 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.995 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:04.995 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.995 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:04.995 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.995 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:04.995 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:04.995 16:34:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:04.995 00:10:04.995 real 0m0.147s 00:10:04.995 user 0m0.093s 00:10:04.995 sys 0m0.017s 00:10:04.995 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.995 16:34:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:04.995 ************************************ 
00:10:04.995 END TEST rpc_plugins 00:10:04.995 ************************************ 00:10:04.995 16:34:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:04.995 16:34:11 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.995 16:34:11 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.995 16:34:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.995 ************************************ 00:10:04.995 START TEST rpc_trace_cmd_test 00:10:04.995 ************************************ 00:10:04.995 16:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:10:04.995 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:04.995 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:04.995 16:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.995 16:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.995 16:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.995 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:04.995 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2934049", 00:10:04.995 "tpoint_group_mask": "0x8", 00:10:04.995 "iscsi_conn": { 00:10:04.995 "mask": "0x2", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "scsi": { 00:10:04.995 "mask": "0x4", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "bdev": { 00:10:04.995 "mask": "0x8", 00:10:04.995 "tpoint_mask": "0xffffffffffffffff" 00:10:04.995 }, 00:10:04.995 "nvmf_rdma": { 00:10:04.995 "mask": "0x10", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "nvmf_tcp": { 00:10:04.995 "mask": "0x20", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "ftl": { 00:10:04.995 "mask": "0x40", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "blobfs": { 00:10:04.995 "mask": "0x80", 00:10:04.995 
"tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "dsa": { 00:10:04.995 "mask": "0x200", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "thread": { 00:10:04.995 "mask": "0x400", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "nvme_pcie": { 00:10:04.995 "mask": "0x800", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "iaa": { 00:10:04.995 "mask": "0x1000", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "nvme_tcp": { 00:10:04.995 "mask": "0x2000", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "bdev_nvme": { 00:10:04.995 "mask": "0x4000", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "sock": { 00:10:04.995 "mask": "0x8000", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "blob": { 00:10:04.995 "mask": "0x10000", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "bdev_raid": { 00:10:04.995 "mask": "0x20000", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 }, 00:10:04.995 "scheduler": { 00:10:04.995 "mask": "0x40000", 00:10:04.995 "tpoint_mask": "0x0" 00:10:04.995 } 00:10:04.995 }' 00:10:04.995 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:05.255 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:10:05.256 00:10:05.256 real 0m0.226s 00:10:05.256 user 0m0.190s 00:10:05.256 sys 0m0.029s 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.256 16:34:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.256 ************************************ 00:10:05.256 END TEST rpc_trace_cmd_test 00:10:05.256 ************************************ 00:10:05.256 16:34:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:05.256 16:34:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:05.256 16:34:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:05.256 16:34:12 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:05.256 16:34:12 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.256 16:34:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.518 ************************************ 00:10:05.518 START TEST rpc_daemon_integrity 00:10:05.518 ************************************ 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:05.518 { 00:10:05.518 "name": "Malloc2", 00:10:05.518 "aliases": [ 00:10:05.518 "10977649-166c-4471-8bf8-fa4efc41e9f7" 00:10:05.518 ], 00:10:05.518 "product_name": "Malloc disk", 00:10:05.518 "block_size": 512, 00:10:05.518 "num_blocks": 16384, 00:10:05.518 "uuid": "10977649-166c-4471-8bf8-fa4efc41e9f7", 00:10:05.518 "assigned_rate_limits": { 00:10:05.518 "rw_ios_per_sec": 0, 00:10:05.518 "rw_mbytes_per_sec": 0, 00:10:05.518 "r_mbytes_per_sec": 0, 00:10:05.518 "w_mbytes_per_sec": 0 00:10:05.518 }, 00:10:05.518 "claimed": false, 00:10:05.518 "zoned": false, 00:10:05.518 "supported_io_types": { 00:10:05.518 "read": true, 00:10:05.518 "write": true, 00:10:05.518 "unmap": true, 00:10:05.518 "flush": true, 00:10:05.518 "reset": true, 00:10:05.518 "nvme_admin": false, 00:10:05.518 "nvme_io": false, 00:10:05.518 "nvme_io_md": false, 00:10:05.518 "write_zeroes": true, 00:10:05.518 "zcopy": true, 00:10:05.518 "get_zone_info": false, 00:10:05.518 "zone_management": false, 00:10:05.518 "zone_append": false, 00:10:05.518 "compare": false, 00:10:05.518 "compare_and_write": false, 00:10:05.518 "abort": true, 00:10:05.518 "seek_hole": false, 00:10:05.518 "seek_data": false, 00:10:05.518 "copy": true, 00:10:05.518 "nvme_iov_md": false 00:10:05.518 }, 00:10:05.518 "memory_domains": [ 00:10:05.518 { 
00:10:05.518 "dma_device_id": "system", 00:10:05.518 "dma_device_type": 1 00:10:05.518 }, 00:10:05.518 { 00:10:05.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.518 "dma_device_type": 2 00:10:05.518 } 00:10:05.518 ], 00:10:05.518 "driver_specific": {} 00:10:05.518 } 00:10:05.518 ]' 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.518 [2024-11-05 16:34:12.471834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:05.518 [2024-11-05 16:34:12.471863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.518 [2024-11-05 16:34:12.471875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xee9090 00:10:05.518 [2024-11-05 16:34:12.471882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.518 [2024-11-05 16:34:12.473186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.518 [2024-11-05 16:34:12.473207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:05.518 Passthru0 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:05.518 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:05.518 { 00:10:05.518 "name": "Malloc2", 00:10:05.518 "aliases": [ 00:10:05.518 "10977649-166c-4471-8bf8-fa4efc41e9f7" 00:10:05.518 ], 00:10:05.518 "product_name": "Malloc disk", 00:10:05.518 "block_size": 512, 00:10:05.518 "num_blocks": 16384, 00:10:05.518 "uuid": "10977649-166c-4471-8bf8-fa4efc41e9f7", 00:10:05.518 "assigned_rate_limits": { 00:10:05.518 "rw_ios_per_sec": 0, 00:10:05.518 "rw_mbytes_per_sec": 0, 00:10:05.518 "r_mbytes_per_sec": 0, 00:10:05.518 "w_mbytes_per_sec": 0 00:10:05.518 }, 00:10:05.518 "claimed": true, 00:10:05.518 "claim_type": "exclusive_write", 00:10:05.518 "zoned": false, 00:10:05.518 "supported_io_types": { 00:10:05.518 "read": true, 00:10:05.518 "write": true, 00:10:05.518 "unmap": true, 00:10:05.518 "flush": true, 00:10:05.518 "reset": true, 00:10:05.518 "nvme_admin": false, 00:10:05.518 "nvme_io": false, 00:10:05.518 "nvme_io_md": false, 00:10:05.518 "write_zeroes": true, 00:10:05.518 "zcopy": true, 00:10:05.518 "get_zone_info": false, 00:10:05.518 "zone_management": false, 00:10:05.518 "zone_append": false, 00:10:05.518 "compare": false, 00:10:05.518 "compare_and_write": false, 00:10:05.518 "abort": true, 00:10:05.518 "seek_hole": false, 00:10:05.518 "seek_data": false, 00:10:05.518 "copy": true, 00:10:05.518 "nvme_iov_md": false 00:10:05.518 }, 00:10:05.518 "memory_domains": [ 00:10:05.518 { 00:10:05.518 "dma_device_id": "system", 00:10:05.518 "dma_device_type": 1 00:10:05.518 }, 00:10:05.518 { 00:10:05.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.518 "dma_device_type": 2 00:10:05.518 } 00:10:05.518 ], 00:10:05.518 "driver_specific": {} 00:10:05.518 }, 00:10:05.518 { 00:10:05.518 "name": "Passthru0", 00:10:05.518 "aliases": [ 00:10:05.518 "852c59de-992e-5b31-bf77-b74aee81989f" 00:10:05.518 ], 00:10:05.518 "product_name": "passthru", 00:10:05.518 "block_size": 512, 00:10:05.518 "num_blocks": 16384, 00:10:05.518 "uuid": 
"852c59de-992e-5b31-bf77-b74aee81989f", 00:10:05.518 "assigned_rate_limits": { 00:10:05.518 "rw_ios_per_sec": 0, 00:10:05.518 "rw_mbytes_per_sec": 0, 00:10:05.518 "r_mbytes_per_sec": 0, 00:10:05.518 "w_mbytes_per_sec": 0 00:10:05.518 }, 00:10:05.518 "claimed": false, 00:10:05.518 "zoned": false, 00:10:05.518 "supported_io_types": { 00:10:05.518 "read": true, 00:10:05.518 "write": true, 00:10:05.518 "unmap": true, 00:10:05.518 "flush": true, 00:10:05.518 "reset": true, 00:10:05.518 "nvme_admin": false, 00:10:05.518 "nvme_io": false, 00:10:05.518 "nvme_io_md": false, 00:10:05.518 "write_zeroes": true, 00:10:05.518 "zcopy": true, 00:10:05.518 "get_zone_info": false, 00:10:05.518 "zone_management": false, 00:10:05.518 "zone_append": false, 00:10:05.518 "compare": false, 00:10:05.518 "compare_and_write": false, 00:10:05.518 "abort": true, 00:10:05.518 "seek_hole": false, 00:10:05.518 "seek_data": false, 00:10:05.518 "copy": true, 00:10:05.518 "nvme_iov_md": false 00:10:05.518 }, 00:10:05.518 "memory_domains": [ 00:10:05.518 { 00:10:05.519 "dma_device_id": "system", 00:10:05.519 "dma_device_type": 1 00:10:05.519 }, 00:10:05.519 { 00:10:05.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.519 "dma_device_type": 2 00:10:05.519 } 00:10:05.519 ], 00:10:05.519 "driver_specific": { 00:10:05.519 "passthru": { 00:10:05.519 "name": "Passthru0", 00:10:05.519 "base_bdev_name": "Malloc2" 00:10:05.519 } 00:10:05.519 } 00:10:05.519 } 00:10:05.519 ]' 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:05.519 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:05.780 16:34:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:05.780 00:10:05.780 real 0m0.288s 00:10:05.780 user 0m0.182s 00:10:05.780 sys 0m0.039s 00:10:05.780 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.780 16:34:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.780 ************************************ 00:10:05.780 END TEST rpc_daemon_integrity 00:10:05.780 ************************************ 00:10:05.780 16:34:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:05.780 16:34:12 rpc -- rpc/rpc.sh@84 -- # killprocess 2934049 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@952 -- # '[' -z 2934049 ']' 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@956 -- # kill -0 2934049 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@957 -- # uname 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:05.780 16:34:12 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2934049 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2934049' 00:10:05.780 killing process with pid 2934049 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@971 -- # kill 2934049 00:10:05.780 16:34:12 rpc -- common/autotest_common.sh@976 -- # wait 2934049 00:10:06.041 00:10:06.041 real 0m2.552s 00:10:06.041 user 0m3.304s 00:10:06.041 sys 0m0.718s 00:10:06.041 16:34:12 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.041 16:34:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.041 ************************************ 00:10:06.041 END TEST rpc 00:10:06.041 ************************************ 00:10:06.041 16:34:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:06.041 16:34:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:06.041 16:34:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.041 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:10:06.041 ************************************ 00:10:06.041 START TEST skip_rpc 00:10:06.041 ************************************ 00:10:06.041 16:34:12 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:06.041 * Looking for test storage... 
00:10:06.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:10:06.041 16:34:13 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:06.041 16:34:13 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:06.041 16:34:13 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.303 16:34:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:06.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.303 --rc genhtml_branch_coverage=1 00:10:06.303 --rc genhtml_function_coverage=1 00:10:06.303 --rc genhtml_legend=1 00:10:06.303 --rc geninfo_all_blocks=1 00:10:06.303 --rc geninfo_unexecuted_blocks=1 00:10:06.303 00:10:06.303 ' 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:06.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.303 --rc genhtml_branch_coverage=1 00:10:06.303 --rc genhtml_function_coverage=1 00:10:06.303 --rc genhtml_legend=1 00:10:06.303 --rc geninfo_all_blocks=1 00:10:06.303 --rc geninfo_unexecuted_blocks=1 00:10:06.303 00:10:06.303 ' 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:10:06.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.303 --rc genhtml_branch_coverage=1 00:10:06.303 --rc genhtml_function_coverage=1 00:10:06.303 --rc genhtml_legend=1 00:10:06.303 --rc geninfo_all_blocks=1 00:10:06.303 --rc geninfo_unexecuted_blocks=1 00:10:06.303 00:10:06.303 ' 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:06.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.303 --rc genhtml_branch_coverage=1 00:10:06.303 --rc genhtml_function_coverage=1 00:10:06.303 --rc genhtml_legend=1 00:10:06.303 --rc geninfo_all_blocks=1 00:10:06.303 --rc geninfo_unexecuted_blocks=1 00:10:06.303 00:10:06.303 ' 00:10:06.303 16:34:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:06.303 16:34:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:06.303 16:34:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.303 16:34:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.303 ************************************ 00:10:06.303 START TEST skip_rpc 00:10:06.303 ************************************ 00:10:06.303 16:34:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:10:06.303 16:34:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2934798 00:10:06.303 16:34:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:06.303 16:34:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:06.303 16:34:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:10:06.303 [2024-11-05 16:34:13.304675] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:10:06.303 [2024-11-05 16:34:13.304741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934798 ] 00:10:06.563 [2024-11-05 16:34:13.379425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.563 [2024-11-05 16:34:13.421978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:11.848 16:34:18 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2934798 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2934798 ']' 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2934798 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2934798 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2934798' 00:10:11.848 killing process with pid 2934798 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2934798 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2934798 00:10:11.848 00:10:11.848 real 0m5.284s 00:10:11.848 user 0m5.094s 00:10:11.848 sys 0m0.241s 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.848 16:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.848 ************************************ 00:10:11.848 END TEST skip_rpc 00:10:11.848 ************************************ 00:10:11.848 16:34:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:11.848 16:34:18 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:11.848 16:34:18 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.848 16:34:18 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.848 ************************************ 00:10:11.848 START TEST skip_rpc_with_json 00:10:11.848 ************************************ 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2935937 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2935937 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2935937 ']' 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.848 16:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:11.849 16:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.849 16:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:11.849 16:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:11.849 [2024-11-05 16:34:18.653790] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:11.849 [2024-11-05 16:34:18.653843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2935937 ] 00:10:11.849 [2024-11-05 16:34:18.724286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.849 [2024-11-05 16:34:18.761376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:12.420 [2024-11-05 16:34:19.436241] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:12.420 request: 00:10:12.420 { 00:10:12.420 "trtype": "tcp", 00:10:12.420 "method": "nvmf_get_transports", 00:10:12.420 "req_id": 1 00:10:12.420 } 00:10:12.420 Got JSON-RPC error response 00:10:12.420 response: 00:10:12.420 { 00:10:12.420 "code": -19, 00:10:12.420 "message": "No such device" 00:10:12.420 } 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:12.420 [2024-11-05 16:34:19.448366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.420 16:34:19 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.420 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:12.682 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.682 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:12.682 { 00:10:12.682 "subsystems": [ 00:10:12.682 { 00:10:12.682 "subsystem": "fsdev", 00:10:12.682 "config": [ 00:10:12.682 { 00:10:12.682 "method": "fsdev_set_opts", 00:10:12.682 "params": { 00:10:12.682 "fsdev_io_pool_size": 65535, 00:10:12.682 "fsdev_io_cache_size": 256 00:10:12.682 } 00:10:12.682 } 00:10:12.682 ] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "vfio_user_target", 00:10:12.682 "config": null 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "keyring", 00:10:12.682 "config": [] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "iobuf", 00:10:12.682 "config": [ 00:10:12.682 { 00:10:12.682 "method": "iobuf_set_options", 00:10:12.682 "params": { 00:10:12.682 "small_pool_count": 8192, 00:10:12.682 "large_pool_count": 1024, 00:10:12.682 "small_bufsize": 8192, 00:10:12.682 "large_bufsize": 135168, 00:10:12.682 "enable_numa": false 00:10:12.682 } 00:10:12.682 } 00:10:12.682 ] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "sock", 00:10:12.682 "config": [ 00:10:12.682 { 00:10:12.682 "method": "sock_set_default_impl", 00:10:12.682 "params": { 00:10:12.682 "impl_name": "posix" 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "sock_impl_set_options", 00:10:12.682 "params": { 00:10:12.682 "impl_name": "ssl", 00:10:12.682 "recv_buf_size": 4096, 00:10:12.682 "send_buf_size": 4096, 
00:10:12.682 "enable_recv_pipe": true, 00:10:12.682 "enable_quickack": false, 00:10:12.682 "enable_placement_id": 0, 00:10:12.682 "enable_zerocopy_send_server": true, 00:10:12.682 "enable_zerocopy_send_client": false, 00:10:12.682 "zerocopy_threshold": 0, 00:10:12.682 "tls_version": 0, 00:10:12.682 "enable_ktls": false 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "sock_impl_set_options", 00:10:12.682 "params": { 00:10:12.682 "impl_name": "posix", 00:10:12.682 "recv_buf_size": 2097152, 00:10:12.682 "send_buf_size": 2097152, 00:10:12.682 "enable_recv_pipe": true, 00:10:12.682 "enable_quickack": false, 00:10:12.682 "enable_placement_id": 0, 00:10:12.682 "enable_zerocopy_send_server": true, 00:10:12.682 "enable_zerocopy_send_client": false, 00:10:12.682 "zerocopy_threshold": 0, 00:10:12.682 "tls_version": 0, 00:10:12.682 "enable_ktls": false 00:10:12.682 } 00:10:12.682 } 00:10:12.682 ] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "vmd", 00:10:12.682 "config": [] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "accel", 00:10:12.682 "config": [ 00:10:12.682 { 00:10:12.682 "method": "accel_set_options", 00:10:12.682 "params": { 00:10:12.682 "small_cache_size": 128, 00:10:12.682 "large_cache_size": 16, 00:10:12.682 "task_count": 2048, 00:10:12.682 "sequence_count": 2048, 00:10:12.682 "buf_count": 2048 00:10:12.682 } 00:10:12.682 } 00:10:12.682 ] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "bdev", 00:10:12.682 "config": [ 00:10:12.682 { 00:10:12.682 "method": "bdev_set_options", 00:10:12.682 "params": { 00:10:12.682 "bdev_io_pool_size": 65535, 00:10:12.682 "bdev_io_cache_size": 256, 00:10:12.682 "bdev_auto_examine": true, 00:10:12.682 "iobuf_small_cache_size": 128, 00:10:12.682 "iobuf_large_cache_size": 16 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "bdev_raid_set_options", 00:10:12.682 "params": { 00:10:12.682 "process_window_size_kb": 1024, 00:10:12.682 "process_max_bandwidth_mb_sec": 0 
00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "bdev_iscsi_set_options", 00:10:12.682 "params": { 00:10:12.682 "timeout_sec": 30 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "bdev_nvme_set_options", 00:10:12.682 "params": { 00:10:12.682 "action_on_timeout": "none", 00:10:12.682 "timeout_us": 0, 00:10:12.682 "timeout_admin_us": 0, 00:10:12.682 "keep_alive_timeout_ms": 10000, 00:10:12.682 "arbitration_burst": 0, 00:10:12.682 "low_priority_weight": 0, 00:10:12.682 "medium_priority_weight": 0, 00:10:12.682 "high_priority_weight": 0, 00:10:12.682 "nvme_adminq_poll_period_us": 10000, 00:10:12.682 "nvme_ioq_poll_period_us": 0, 00:10:12.682 "io_queue_requests": 0, 00:10:12.682 "delay_cmd_submit": true, 00:10:12.682 "transport_retry_count": 4, 00:10:12.682 "bdev_retry_count": 3, 00:10:12.682 "transport_ack_timeout": 0, 00:10:12.682 "ctrlr_loss_timeout_sec": 0, 00:10:12.682 "reconnect_delay_sec": 0, 00:10:12.682 "fast_io_fail_timeout_sec": 0, 00:10:12.682 "disable_auto_failback": false, 00:10:12.682 "generate_uuids": false, 00:10:12.682 "transport_tos": 0, 00:10:12.682 "nvme_error_stat": false, 00:10:12.682 "rdma_srq_size": 0, 00:10:12.682 "io_path_stat": false, 00:10:12.682 "allow_accel_sequence": false, 00:10:12.682 "rdma_max_cq_size": 0, 00:10:12.682 "rdma_cm_event_timeout_ms": 0, 00:10:12.682 "dhchap_digests": [ 00:10:12.682 "sha256", 00:10:12.682 "sha384", 00:10:12.682 "sha512" 00:10:12.682 ], 00:10:12.682 "dhchap_dhgroups": [ 00:10:12.682 "null", 00:10:12.682 "ffdhe2048", 00:10:12.682 "ffdhe3072", 00:10:12.682 "ffdhe4096", 00:10:12.682 "ffdhe6144", 00:10:12.682 "ffdhe8192" 00:10:12.682 ] 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "bdev_nvme_set_hotplug", 00:10:12.682 "params": { 00:10:12.682 "period_us": 100000, 00:10:12.682 "enable": false 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "bdev_wait_for_examine" 00:10:12.682 } 00:10:12.682 ] 00:10:12.682 }, 00:10:12.682 { 
00:10:12.682 "subsystem": "scsi", 00:10:12.682 "config": null 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "scheduler", 00:10:12.682 "config": [ 00:10:12.682 { 00:10:12.682 "method": "framework_set_scheduler", 00:10:12.682 "params": { 00:10:12.682 "name": "static" 00:10:12.682 } 00:10:12.682 } 00:10:12.682 ] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "vhost_scsi", 00:10:12.682 "config": [] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "vhost_blk", 00:10:12.682 "config": [] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "ublk", 00:10:12.682 "config": [] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "nbd", 00:10:12.682 "config": [] 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "subsystem": "nvmf", 00:10:12.682 "config": [ 00:10:12.682 { 00:10:12.682 "method": "nvmf_set_config", 00:10:12.682 "params": { 00:10:12.682 "discovery_filter": "match_any", 00:10:12.682 "admin_cmd_passthru": { 00:10:12.682 "identify_ctrlr": false 00:10:12.682 }, 00:10:12.682 "dhchap_digests": [ 00:10:12.682 "sha256", 00:10:12.682 "sha384", 00:10:12.682 "sha512" 00:10:12.682 ], 00:10:12.682 "dhchap_dhgroups": [ 00:10:12.682 "null", 00:10:12.682 "ffdhe2048", 00:10:12.682 "ffdhe3072", 00:10:12.682 "ffdhe4096", 00:10:12.682 "ffdhe6144", 00:10:12.682 "ffdhe8192" 00:10:12.682 ] 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "nvmf_set_max_subsystems", 00:10:12.682 "params": { 00:10:12.682 "max_subsystems": 1024 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "nvmf_set_crdt", 00:10:12.682 "params": { 00:10:12.682 "crdt1": 0, 00:10:12.682 "crdt2": 0, 00:10:12.682 "crdt3": 0 00:10:12.682 } 00:10:12.682 }, 00:10:12.682 { 00:10:12.682 "method": "nvmf_create_transport", 00:10:12.682 "params": { 00:10:12.682 "trtype": "TCP", 00:10:12.682 "max_queue_depth": 128, 00:10:12.682 "max_io_qpairs_per_ctrlr": 127, 00:10:12.682 "in_capsule_data_size": 4096, 00:10:12.682 "max_io_size": 131072, 00:10:12.682 
"io_unit_size": 131072, 00:10:12.682 "max_aq_depth": 128, 00:10:12.682 "num_shared_buffers": 511, 00:10:12.682 "buf_cache_size": 4294967295, 00:10:12.682 "dif_insert_or_strip": false, 00:10:12.682 "zcopy": false, 00:10:12.682 "c2h_success": true, 00:10:12.682 "sock_priority": 0, 00:10:12.682 "abort_timeout_sec": 1, 00:10:12.682 "ack_timeout": 0, 00:10:12.683 "data_wr_pool_size": 0 00:10:12.683 } 00:10:12.683 } 00:10:12.683 ] 00:10:12.683 }, 00:10:12.683 { 00:10:12.683 "subsystem": "iscsi", 00:10:12.683 "config": [ 00:10:12.683 { 00:10:12.683 "method": "iscsi_set_options", 00:10:12.683 "params": { 00:10:12.683 "node_base": "iqn.2016-06.io.spdk", 00:10:12.683 "max_sessions": 128, 00:10:12.683 "max_connections_per_session": 2, 00:10:12.683 "max_queue_depth": 64, 00:10:12.683 "default_time2wait": 2, 00:10:12.683 "default_time2retain": 20, 00:10:12.683 "first_burst_length": 8192, 00:10:12.683 "immediate_data": true, 00:10:12.683 "allow_duplicated_isid": false, 00:10:12.683 "error_recovery_level": 0, 00:10:12.683 "nop_timeout": 60, 00:10:12.683 "nop_in_interval": 30, 00:10:12.683 "disable_chap": false, 00:10:12.683 "require_chap": false, 00:10:12.683 "mutual_chap": false, 00:10:12.683 "chap_group": 0, 00:10:12.683 "max_large_datain_per_connection": 64, 00:10:12.683 "max_r2t_per_connection": 4, 00:10:12.683 "pdu_pool_size": 36864, 00:10:12.683 "immediate_data_pool_size": 16384, 00:10:12.683 "data_out_pool_size": 2048 00:10:12.683 } 00:10:12.683 } 00:10:12.683 ] 00:10:12.683 } 00:10:12.683 ] 00:10:12.683 } 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2935937 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2935937 ']' 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2935937 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # uname 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2935937 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2935937' 00:10:12.683 killing process with pid 2935937 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2935937 00:10:12.683 16:34:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2935937 00:10:12.943 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2936164 00:10:12.943 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:12.943 16:34:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2936164 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2936164 ']' 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2936164 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2936164 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2936164' 00:10:18.229 killing process with pid 2936164 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2936164 00:10:18.229 16:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2936164 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:18.229 00:10:18.229 real 0m6.581s 00:10:18.229 user 0m6.502s 00:10:18.229 sys 0m0.534s 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:18.229 ************************************ 00:10:18.229 END TEST skip_rpc_with_json 00:10:18.229 ************************************ 00:10:18.229 16:34:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:18.229 16:34:25 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:18.229 16:34:25 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.229 16:34:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.229 ************************************ 00:10:18.229 START TEST skip_rpc_with_delay 00:10:18.229 ************************************ 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:18.229 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:18.489 [2024-11-05 16:34:25.316202] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:10:18.489 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:10:18.489 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.489 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.489 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.489 00:10:18.489 real 0m0.079s 00:10:18.489 user 0m0.045s 00:10:18.489 sys 0m0.033s 00:10:18.489 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.489 16:34:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:18.489 ************************************ 00:10:18.489 END TEST skip_rpc_with_delay 00:10:18.489 ************************************ 00:10:18.489 16:34:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:18.489 16:34:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:18.489 16:34:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:18.489 16:34:25 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:18.489 16:34:25 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.489 16:34:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.489 ************************************ 00:10:18.489 START TEST exit_on_failed_rpc_init 00:10:18.489 ************************************ 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2937348 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2937348 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2937348 ']' 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:18.489 16:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:18.489 [2024-11-05 16:34:25.475549] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:10:18.489 [2024-11-05 16:34:25.475611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937348 ] 00:10:18.489 [2024-11-05 16:34:25.550341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.749 [2024-11-05 16:34:25.592685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:19.321 
16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:19.321 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:19.321 [2024-11-05 16:34:26.333974] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:19.321 [2024-11-05 16:34:26.334029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937413 ] 00:10:19.583 [2024-11-05 16:34:26.419782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.583 [2024-11-05 16:34:26.455854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.583 [2024-11-05 16:34:26.455905] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:19.583 [2024-11-05 16:34:26.455915] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:19.583 [2024-11-05 16:34:26.455921] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2937348 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2937348 ']' 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2937348 00:10:19.583 16:34:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2937348 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2937348' 00:10:19.583 killing process with pid 2937348 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2937348 00:10:19.583 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2937348 00:10:19.844 00:10:19.844 real 0m1.350s 00:10:19.844 user 0m1.610s 00:10:19.844 sys 0m0.356s 00:10:19.844 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:19.844 16:34:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:19.844 ************************************ 00:10:19.844 END TEST exit_on_failed_rpc_init 00:10:19.844 ************************************ 00:10:19.844 16:34:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:19.844 00:10:19.844 real 0m13.811s 00:10:19.844 user 0m13.474s 00:10:19.844 sys 0m1.486s 00:10:19.844 16:34:26 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:19.844 16:34:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.844 ************************************ 00:10:19.844 END TEST skip_rpc 00:10:19.844 ************************************ 00:10:19.844 16:34:26 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:19.844 16:34:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:19.844 16:34:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:19.844 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:10:19.844 ************************************ 00:10:19.844 START TEST rpc_client 00:10:19.844 ************************************ 00:10:19.844 16:34:26 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:20.106 * Looking for test storage... 00:10:20.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:10:20.106 16:34:26 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.106 16:34:26 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.106 16:34:26 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.106 16:34:27 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.106 16:34:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:20.106 16:34:27 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.106 16:34:27 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.106 --rc genhtml_branch_coverage=1 00:10:20.106 --rc genhtml_function_coverage=1 00:10:20.106 --rc genhtml_legend=1 00:10:20.106 --rc geninfo_all_blocks=1 00:10:20.106 --rc geninfo_unexecuted_blocks=1 00:10:20.106 00:10:20.106 ' 00:10:20.106 16:34:27 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.106 --rc genhtml_branch_coverage=1 
00:10:20.106 --rc genhtml_function_coverage=1 00:10:20.106 --rc genhtml_legend=1 00:10:20.106 --rc geninfo_all_blocks=1 00:10:20.106 --rc geninfo_unexecuted_blocks=1 00:10:20.106 00:10:20.106 ' 00:10:20.106 16:34:27 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.106 --rc genhtml_branch_coverage=1 00:10:20.106 --rc genhtml_function_coverage=1 00:10:20.106 --rc genhtml_legend=1 00:10:20.106 --rc geninfo_all_blocks=1 00:10:20.106 --rc geninfo_unexecuted_blocks=1 00:10:20.106 00:10:20.106 ' 00:10:20.106 16:34:27 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.106 --rc genhtml_branch_coverage=1 00:10:20.106 --rc genhtml_function_coverage=1 00:10:20.106 --rc genhtml_legend=1 00:10:20.106 --rc geninfo_all_blocks=1 00:10:20.106 --rc geninfo_unexecuted_blocks=1 00:10:20.106 00:10:20.106 ' 00:10:20.107 16:34:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:10:20.107 OK 00:10:20.107 16:34:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:20.107 00:10:20.107 real 0m0.212s 00:10:20.107 user 0m0.121s 00:10:20.107 sys 0m0.102s 00:10:20.107 16:34:27 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.107 16:34:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:20.107 ************************************ 00:10:20.107 END TEST rpc_client 00:10:20.107 ************************************ 00:10:20.107 16:34:27 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:10:20.107 16:34:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:20.107 16:34:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.107 16:34:27 -- common/autotest_common.sh@10 
-- # set +x 00:10:20.107 ************************************ 00:10:20.107 START TEST json_config 00:10:20.107 ************************************ 00:10:20.107 16:34:27 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:10:20.369 16:34:27 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.369 16:34:27 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.369 16:34:27 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.369 16:34:27 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.369 16:34:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.369 16:34:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.369 16:34:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.369 16:34:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.369 16:34:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.369 16:34:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.369 16:34:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.369 16:34:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.369 16:34:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.369 16:34:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.369 16:34:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.369 16:34:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:20.369 16:34:27 json_config -- scripts/common.sh@345 -- # : 1 00:10:20.369 16:34:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.369 16:34:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.369 16:34:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:20.369 16:34:27 json_config -- scripts/common.sh@353 -- # local d=1 00:10:20.369 16:34:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.369 16:34:27 json_config -- scripts/common.sh@355 -- # echo 1 00:10:20.369 16:34:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.369 16:34:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:20.369 16:34:27 json_config -- scripts/common.sh@353 -- # local d=2 00:10:20.369 16:34:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.369 16:34:27 json_config -- scripts/common.sh@355 -- # echo 2 00:10:20.369 16:34:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.369 16:34:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.369 16:34:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.369 16:34:27 json_config -- scripts/common.sh@368 -- # return 0 00:10:20.369 16:34:27 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.369 16:34:27 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.369 --rc genhtml_branch_coverage=1 00:10:20.369 --rc genhtml_function_coverage=1 00:10:20.369 --rc genhtml_legend=1 00:10:20.369 --rc geninfo_all_blocks=1 00:10:20.369 --rc geninfo_unexecuted_blocks=1 00:10:20.369 00:10:20.369 ' 00:10:20.369 16:34:27 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.369 --rc genhtml_branch_coverage=1 00:10:20.369 --rc genhtml_function_coverage=1 00:10:20.369 --rc genhtml_legend=1 00:10:20.369 --rc geninfo_all_blocks=1 00:10:20.369 --rc geninfo_unexecuted_blocks=1 00:10:20.369 00:10:20.369 ' 00:10:20.369 16:34:27 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.369 --rc genhtml_branch_coverage=1 00:10:20.369 --rc genhtml_function_coverage=1 00:10:20.369 --rc genhtml_legend=1 00:10:20.369 --rc geninfo_all_blocks=1 00:10:20.369 --rc geninfo_unexecuted_blocks=1 00:10:20.369 00:10:20.369 ' 00:10:20.369 16:34:27 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.369 --rc genhtml_branch_coverage=1 00:10:20.369 --rc genhtml_function_coverage=1 00:10:20.369 --rc genhtml_legend=1 00:10:20.369 --rc geninfo_all_blocks=1 00:10:20.369 --rc geninfo_unexecuted_blocks=1 00:10:20.369 00:10:20.369 ' 00:10:20.369 16:34:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.369 16:34:27 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.369 16:34:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.369 16:34:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.370 16:34:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.370 16:34:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.370 16:34:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.370 16:34:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.370 16:34:27 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.370 16:34:27 json_config -- paths/export.sh@5 -- # export PATH 00:10:20.370 16:34:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:20.370 16:34:27 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:20.370 16:34:27 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:20.370 16:34:27 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@50 -- # : 0 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:20.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:20.370 16:34:27 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:20.370 16:34:27 json_config -- 
json_config/json_config.sh@40 -- # last_event_id=0 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:10:20.370 INFO: JSON configuration test init 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.370 16:34:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:10:20.370 16:34:27 json_config -- json_config/common.sh@9 -- # local app=target 00:10:20.370 16:34:27 json_config -- json_config/common.sh@10 -- # shift 00:10:20.370 16:34:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:20.370 16:34:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:20.370 16:34:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:20.370 16:34:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:20.370 16:34:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:20.370 16:34:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2937819 00:10:20.370 16:34:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:20.370 Waiting for target to run... 
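The `lt 1.15 2` / `cmp_versions` xtrace earlier in this log walks scripts/common.sh's component-wise version comparison: both versions are split on `.`, `-`, and `:` into arrays, then compared element by element. A minimal standalone sketch of that logic, with function names taken from the trace but bodies reconstructed here (so details may differ from the real script), is:

```shell
# Reconstruction of the version comparison traced above (not the
# verbatim scripts/common.sh source). Versions split on ".-:" and
# compare component-by-component as integers; missing components
# are treated as 0.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        # First differing component decides the comparison.
        (( a > b )) && { [ "$op" = ">" ]; return; }
        (( a < b )) && { [ "$op" = "<" ]; return; }
    done
    # All components equal.
    [ "$op" = "=" ]
}
```

This is why `lt 1.15 2` in the trace succeeds: `15` is compared as the integer 15 against a missing second component, but the first components `1 < 2` already decide the result.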
00:10:20.370 16:34:27 json_config -- json_config/common.sh@25 -- # waitforlisten 2937819 /var/tmp/spdk_tgt.sock 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@833 -- # '[' -z 2937819 ']' 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:20.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:20.370 16:34:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:20.370 16:34:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.632 [2024-11-05 16:34:27.449168] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:20.632 [2024-11-05 16:34:27.449249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937819 ] 00:10:20.893 [2024-11-05 16:34:27.768950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.893 [2024-11-05 16:34:27.802600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.465 16:34:28 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:21.465 16:34:28 json_config -- common/autotest_common.sh@866 -- # return 0 00:10:21.465 16:34:28 json_config -- json_config/common.sh@26 -- # echo '' 00:10:21.465 00:10:21.465 16:34:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:10:21.465 16:34:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:10:21.465 16:34:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.465 16:34:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:21.465 16:34:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:10:21.465 16:34:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:10:21.465 16:34:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.465 16:34:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:21.465 16:34:28 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:21.465 16:34:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:10:21.465 16:34:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:22.035 16:34:28 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:10:22.035 16:34:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:22.035 16:34:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.035 16:34:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:22.035 16:34:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:22.036 16:34:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:22.036 16:34:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:22.036 16:34:28 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:10:22.036 16:34:28 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:10:22.036 16:34:28 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:10:22.036 16:34:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:22.036 16:34:28 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@51 -- # local get_types 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@54 -- # sort 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:10:22.036 16:34:29 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:10:22.036 16:34:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.036 16:34:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@62 -- # return 0 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:10:22.036 16:34:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.036 16:34:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:10:22.036 16:34:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:22.036 16:34:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:22.296 MallocForNvmf0 00:10:22.296 16:34:29 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
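The `tgt_check_notification_types` trace above computes `type_diff` by echoing the expected and reported notification types together, then piping through `tr ' ' '\n' | sort | uniq -u`. Since `uniq -u` keeps only lines that occur exactly once, any type present in both lists cancels out, and an empty result means the sets match. A self-contained sketch of that trick (the `type_diff` helper name mirrors the trace variable; the wrapper function is hypothetical):

```shell
# Set symmetric difference via sort | uniq -u, as used by the
# notification-type check traced above. Assumes each input list
# contains every type at most once, so duplicates only arise when
# a type appears in BOTH lists and is then dropped by uniq -u.
type_diff() {
    # $1 = expected types, $2 = reported types (space-separated)
    echo "$1" "$2" | tr ' ' '\n' | sort | uniq -u
}
```

In the log the diff is empty (`type_diff=`), so the target reports exactly the enabled `bdev_*` and `fsdev_*` notification types and the check returns 0.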
00:10:22.296 16:34:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:22.557 MallocForNvmf1 00:10:22.557 16:34:29 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:10:22.557 16:34:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:10:22.557 [2024-11-05 16:34:29.582473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.557 16:34:29 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.557 16:34:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.818 16:34:29 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:22.818 16:34:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:23.078 16:34:29 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:23.078 16:34:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:23.338 16:34:30 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:23.339 16:34:30 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:23.339 [2024-11-05 16:34:30.393038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:23.599 16:34:30 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:10:23.599 16:34:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.599 16:34:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:23.599 16:34:30 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:10:23.599 16:34:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.599 16:34:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:23.599 16:34:30 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:10:23.599 16:34:30 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:23.599 16:34:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:23.599 MallocBdevForConfigChangeCheck 00:10:23.861 16:34:30 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:10:23.861 16:34:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.861 16:34:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 16:34:30 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:10:23.861 16:34:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:24.122 16:34:31 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:10:24.122 INFO: shutting down applications... 00:10:24.122 16:34:31 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:10:24.122 16:34:31 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:10:24.122 16:34:31 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:10:24.122 16:34:31 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:24.383 Calling clear_iscsi_subsystem 00:10:24.383 Calling clear_nvmf_subsystem 00:10:24.383 Calling clear_nbd_subsystem 00:10:24.383 Calling clear_ublk_subsystem 00:10:24.383 Calling clear_vhost_blk_subsystem 00:10:24.383 Calling clear_vhost_scsi_subsystem 00:10:24.383 Calling clear_bdev_subsystem 00:10:24.645 16:34:31 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:10:24.645 16:34:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:10:24.645 16:34:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:10:24.645 16:34:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:10:24.645 16:34:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:24.645 16:34:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:24.906 16:34:31 json_config -- json_config/json_config.sh@352 -- # break 00:10:24.906 16:34:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:10:24.906 16:34:31 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:10:24.906 16:34:31 json_config -- json_config/common.sh@31 -- # local app=target 00:10:24.906 16:34:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:24.906 16:34:31 json_config -- json_config/common.sh@35 -- # [[ -n 2937819 ]] 00:10:24.906 16:34:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2937819 00:10:24.906 16:34:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:24.906 16:34:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:24.907 16:34:31 json_config -- json_config/common.sh@41 -- # kill -0 2937819 00:10:24.907 16:34:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:25.479 16:34:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:25.479 16:34:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:25.479 16:34:32 json_config -- json_config/common.sh@41 -- # kill -0 2937819 00:10:25.479 16:34:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:25.479 16:34:32 json_config -- json_config/common.sh@43 -- # break 00:10:25.479 16:34:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:25.479 16:34:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:25.479 SPDK target shutdown done 00:10:25.479 16:34:32 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:10:25.479 INFO: relaunching applications... 
00:10:25.479 16:34:32 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:25.479 16:34:32 json_config -- json_config/common.sh@9 -- # local app=target 00:10:25.479 16:34:32 json_config -- json_config/common.sh@10 -- # shift 00:10:25.479 16:34:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:25.479 16:34:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:25.479 16:34:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:25.479 16:34:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:25.479 16:34:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:25.479 16:34:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2938959 00:10:25.479 16:34:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:25.479 Waiting for target to run... 00:10:25.479 16:34:32 json_config -- json_config/common.sh@25 -- # waitforlisten 2938959 /var/tmp/spdk_tgt.sock 00:10:25.479 16:34:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:25.479 16:34:32 json_config -- common/autotest_common.sh@833 -- # '[' -z 2938959 ']' 00:10:25.479 16:34:32 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:25.479 16:34:32 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:25.479 16:34:32 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:25.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:10:25.479 16:34:32 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:25.479 16:34:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.479 [2024-11-05 16:34:32.374409] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:10:25.479 [2024-11-05 16:34:32.374466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938959 ] 00:10:25.741 [2024-11-05 16:34:32.658952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.741 [2024-11-05 16:34:32.688829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.314 [2024-11-05 16:34:33.204624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.314 [2024-11-05 16:34:33.237009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:26.314 16:34:33 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:26.314 16:34:33 json_config -- common/autotest_common.sh@866 -- # return 0 00:10:26.314 16:34:33 json_config -- json_config/common.sh@26 -- # echo '' 00:10:26.314 00:10:26.314 16:34:33 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:10:26.314 16:34:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:26.314 INFO: Checking if target configuration is the same... 
00:10:26.314 16:34:33 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:26.314 16:34:33 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:10:26.314 16:34:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:26.314 + '[' 2 -ne 2 ']' 00:10:26.314 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:26.314 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:10:26.314 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:26.314 +++ basename /dev/fd/62 00:10:26.314 ++ mktemp /tmp/62.XXX 00:10:26.314 + tmp_file_1=/tmp/62.c2h 00:10:26.314 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:26.314 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:26.314 + tmp_file_2=/tmp/spdk_tgt_config.json.HUr 00:10:26.314 + ret=0 00:10:26.314 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:26.575 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:26.854 + diff -u /tmp/62.c2h /tmp/spdk_tgt_config.json.HUr 00:10:26.854 + echo 'INFO: JSON config files are the same' 00:10:26.854 INFO: JSON config files are the same 00:10:26.854 + rm /tmp/62.c2h /tmp/spdk_tgt_config.json.HUr 00:10:26.854 + exit 0 00:10:26.854 16:34:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:10:26.854 16:34:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:26.854 INFO: changing configuration and checking if this can be detected... 
00:10:26.854 16:34:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:26.854 16:34:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:26.854 16:34:33 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:26.854 16:34:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:10:26.854 16:34:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:26.854 + '[' 2 -ne 2 ']' 00:10:26.854 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:26.854 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:10:26.854 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:26.854 +++ basename /dev/fd/62 00:10:26.854 ++ mktemp /tmp/62.XXX 00:10:26.854 + tmp_file_1=/tmp/62.3Qq 00:10:26.854 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:26.854 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:26.854 + tmp_file_2=/tmp/spdk_tgt_config.json.6DD 00:10:26.854 + ret=0 00:10:26.854 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:27.115 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:27.377 + diff -u /tmp/62.3Qq /tmp/spdk_tgt_config.json.6DD 00:10:27.377 + ret=1 00:10:27.377 + echo '=== Start of file: /tmp/62.3Qq ===' 00:10:27.377 + cat /tmp/62.3Qq 00:10:27.377 + echo '=== End of file: /tmp/62.3Qq ===' 00:10:27.377 + echo '' 00:10:27.377 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6DD ===' 00:10:27.377 + cat /tmp/spdk_tgt_config.json.6DD 00:10:27.377 + echo '=== End of file: /tmp/spdk_tgt_config.json.6DD ===' 00:10:27.377 + echo '' 00:10:27.377 + rm /tmp/62.3Qq /tmp/spdk_tgt_config.json.6DD 00:10:27.377 + exit 1 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:10:27.377 INFO: configuration change detected. 
00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@324 -- # [[ -n 2938959 ]] 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@200 -- # uname -s 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:27.377 16:34:34 json_config -- json_config/json_config.sh@330 -- # killprocess 2938959 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@952 -- # '[' -z 2938959 ']' 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@956 -- # kill -0 
2938959 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@957 -- # uname 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2938959 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:27.377 16:34:34 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2938959' 00:10:27.377 killing process with pid 2938959 00:10:27.378 16:34:34 json_config -- common/autotest_common.sh@971 -- # kill 2938959 00:10:27.378 16:34:34 json_config -- common/autotest_common.sh@976 -- # wait 2938959 00:10:27.638 16:34:34 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:27.638 16:34:34 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:10:27.638 16:34:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:27.638 16:34:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:27.638 16:34:34 json_config -- json_config/json_config.sh@335 -- # return 0 00:10:27.638 16:34:34 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:10:27.638 INFO: Success 00:10:27.638 00:10:27.638 real 0m7.483s 00:10:27.638 user 0m9.104s 00:10:27.638 sys 0m2.022s 00:10:27.638 16:34:34 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.638 16:34:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:27.638 ************************************ 00:10:27.638 END TEST json_config 00:10:27.638 ************************************ 00:10:27.638 16:34:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:27.638 16:34:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:27.638 16:34:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.638 16:34:34 -- common/autotest_common.sh@10 -- # set +x 00:10:27.900 ************************************ 00:10:27.900 START TEST json_config_extra_key 00:10:27.900 ************************************ 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.900 16:34:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.900 --rc genhtml_branch_coverage=1 00:10:27.900 --rc genhtml_function_coverage=1 00:10:27.900 --rc genhtml_legend=1 00:10:27.900 --rc geninfo_all_blocks=1 
00:10:27.900 --rc geninfo_unexecuted_blocks=1 00:10:27.900 00:10:27.900 ' 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.900 --rc genhtml_branch_coverage=1 00:10:27.900 --rc genhtml_function_coverage=1 00:10:27.900 --rc genhtml_legend=1 00:10:27.900 --rc geninfo_all_blocks=1 00:10:27.900 --rc geninfo_unexecuted_blocks=1 00:10:27.900 00:10:27.900 ' 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.900 --rc genhtml_branch_coverage=1 00:10:27.900 --rc genhtml_function_coverage=1 00:10:27.900 --rc genhtml_legend=1 00:10:27.900 --rc geninfo_all_blocks=1 00:10:27.900 --rc geninfo_unexecuted_blocks=1 00:10:27.900 00:10:27.900 ' 00:10:27.900 16:34:34 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.900 --rc genhtml_branch_coverage=1 00:10:27.900 --rc genhtml_function_coverage=1 00:10:27.900 --rc genhtml_legend=1 00:10:27.900 --rc geninfo_all_blocks=1 00:10:27.900 --rc geninfo_unexecuted_blocks=1 00:10:27.900 00:10:27.900 ' 00:10:27.900 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:27.900 16:34:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.901 16:34:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.901 16:34:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.901 16:34:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.901 16:34:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.901 16:34:34 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.901 16:34:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.901 16:34:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.901 16:34:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:27.901 16:34:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@48 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:27.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:27.901 16:34:34 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:27.901 16:34:34 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:27.901 INFO: launching applications... 00:10:27.901 16:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2939631 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for 
target to run...' 00:10:27.901 Waiting for target to run... 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2939631 /var/tmp/spdk_tgt.sock 00:10:27.901 16:34:34 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2939631 ']' 00:10:27.901 16:34:34 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:27.901 16:34:34 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:27.901 16:34:34 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:10:27.901 16:34:34 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:27.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:27.901 16:34:34 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:27.901 16:34:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:28.162 [2024-11-05 16:34:34.993960] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:28.162 [2024-11-05 16:34:34.994041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939631 ] 00:10:28.423 [2024-11-05 16:34:35.280803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.423 [2024-11-05 16:34:35.311048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.993 16:34:35 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.993 16:34:35 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:28.993 00:10:28.993 16:34:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:28.993 INFO: shutting down applications... 00:10:28.993 16:34:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2939631 ]] 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2939631 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2939631 00:10:28.993 16:34:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:29.254 16:34:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:29.254 16:34:36 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:10:29.254 16:34:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2939631 00:10:29.254 16:34:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:29.254 16:34:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:29.254 16:34:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:29.254 16:34:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:29.254 SPDK target shutdown done 00:10:29.254 16:34:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:29.254 Success 00:10:29.254 00:10:29.254 real 0m1.568s 00:10:29.254 user 0m1.205s 00:10:29.254 sys 0m0.396s 00:10:29.254 16:34:36 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.254 16:34:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:29.254 ************************************ 00:10:29.254 END TEST json_config_extra_key 00:10:29.254 ************************************ 00:10:29.517 16:34:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:29.517 16:34:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:29.517 16:34:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.517 16:34:36 -- common/autotest_common.sh@10 -- # set +x 00:10:29.517 ************************************ 00:10:29.517 START TEST alias_rpc 00:10:29.517 ************************************ 00:10:29.517 16:34:36 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:29.517 * Looking for test storage... 
00:10:29.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:10:29.517 16:34:36 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:29.517 16:34:36 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:29.517 16:34:36 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:29.517 16:34:36 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.517 16:34:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:29.517 16:34:36 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.517 16:34:36 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:29.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.517 --rc genhtml_branch_coverage=1 00:10:29.517 --rc genhtml_function_coverage=1 00:10:29.517 --rc genhtml_legend=1 00:10:29.517 --rc geninfo_all_blocks=1 00:10:29.517 --rc geninfo_unexecuted_blocks=1 00:10:29.517 00:10:29.517 ' 00:10:29.517 16:34:36 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:29.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.517 --rc genhtml_branch_coverage=1 00:10:29.517 --rc genhtml_function_coverage=1 00:10:29.517 --rc genhtml_legend=1 00:10:29.517 --rc geninfo_all_blocks=1 00:10:29.517 --rc geninfo_unexecuted_blocks=1 00:10:29.517 00:10:29.517 ' 00:10:29.518 16:34:36 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:10:29.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.518 --rc genhtml_branch_coverage=1 00:10:29.518 --rc genhtml_function_coverage=1 00:10:29.518 --rc genhtml_legend=1 00:10:29.518 --rc geninfo_all_blocks=1 00:10:29.518 --rc geninfo_unexecuted_blocks=1 00:10:29.518 00:10:29.518 ' 00:10:29.518 16:34:36 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:29.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.518 --rc genhtml_branch_coverage=1 00:10:29.518 --rc genhtml_function_coverage=1 00:10:29.518 --rc genhtml_legend=1 00:10:29.518 --rc geninfo_all_blocks=1 00:10:29.518 --rc geninfo_unexecuted_blocks=1 00:10:29.518 00:10:29.518 ' 00:10:29.518 16:34:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:29.518 16:34:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2939986 00:10:29.518 16:34:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2939986 00:10:29.518 16:34:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:29.518 16:34:36 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2939986 ']' 00:10:29.518 16:34:36 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.518 16:34:36 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.518 16:34:36 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.518 16:34:36 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.518 16:34:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.778 [2024-11-05 16:34:36.622080] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:29.778 [2024-11-05 16:34:36.622159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939986 ] 00:10:29.778 [2024-11-05 16:34:36.696933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.778 [2024-11-05 16:34:36.739587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.350 16:34:37 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:30.350 16:34:37 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:30.350 16:34:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:10:30.612 16:34:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2939986 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2939986 ']' 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2939986 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2939986 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2939986' 00:10:30.612 killing process with pid 2939986 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@971 -- # kill 2939986 00:10:30.612 16:34:37 alias_rpc -- common/autotest_common.sh@976 -- # wait 2939986 00:10:30.873 00:10:30.873 real 0m1.507s 00:10:30.873 user 0m1.649s 00:10:30.873 sys 0m0.413s 00:10:30.873 16:34:37 alias_rpc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:10:30.873 16:34:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.873 ************************************ 00:10:30.873 END TEST alias_rpc 00:10:30.873 ************************************ 00:10:30.873 16:34:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:30.873 16:34:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:30.873 16:34:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:30.873 16:34:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:30.873 16:34:37 -- common/autotest_common.sh@10 -- # set +x 00:10:31.134 ************************************ 00:10:31.134 START TEST spdkcli_tcp 00:10:31.134 ************************************ 00:10:31.134 16:34:37 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:31.134 * Looking for test storage... 
00:10:31.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.134 16:34:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:31.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.134 --rc genhtml_branch_coverage=1 00:10:31.134 --rc genhtml_function_coverage=1 00:10:31.134 --rc genhtml_legend=1 00:10:31.134 --rc geninfo_all_blocks=1 00:10:31.134 --rc geninfo_unexecuted_blocks=1 00:10:31.134 00:10:31.134 ' 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:31.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.134 --rc genhtml_branch_coverage=1 00:10:31.134 --rc genhtml_function_coverage=1 00:10:31.134 --rc genhtml_legend=1 00:10:31.134 --rc geninfo_all_blocks=1 00:10:31.134 --rc geninfo_unexecuted_blocks=1 00:10:31.134 00:10:31.134 ' 00:10:31.134 16:34:38 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:31.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.134 --rc genhtml_branch_coverage=1 00:10:31.134 --rc genhtml_function_coverage=1 00:10:31.134 --rc genhtml_legend=1 00:10:31.134 --rc geninfo_all_blocks=1 00:10:31.134 --rc geninfo_unexecuted_blocks=1 00:10:31.134 00:10:31.134 ' 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:31.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.134 --rc genhtml_branch_coverage=1 00:10:31.134 --rc genhtml_function_coverage=1 00:10:31.134 --rc genhtml_legend=1 00:10:31.134 --rc geninfo_all_blocks=1 00:10:31.134 --rc geninfo_unexecuted_blocks=1 00:10:31.134 00:10:31.134 ' 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2940327 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2940327 00:10:31.134 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2940327 ']' 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.134 16:34:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.396 [2024-11-05 16:34:38.208795] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:10:31.396 [2024-11-05 16:34:38.208850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940327 ] 00:10:31.396 [2024-11-05 16:34:38.281192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:31.396 [2024-11-05 16:34:38.318763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.396 [2024-11-05 16:34:38.318775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.658 16:34:38 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:31.658 16:34:38 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:10:31.658 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2940509 00:10:31.659 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:31.659 16:34:38 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:31.659 [ 00:10:31.659 "bdev_malloc_delete", 00:10:31.659 "bdev_malloc_create", 00:10:31.659 "bdev_null_resize", 00:10:31.659 "bdev_null_delete", 00:10:31.659 "bdev_null_create", 00:10:31.659 "bdev_nvme_cuse_unregister", 00:10:31.659 "bdev_nvme_cuse_register", 00:10:31.659 "bdev_opal_new_user", 00:10:31.659 "bdev_opal_set_lock_state", 00:10:31.659 "bdev_opal_delete", 00:10:31.659 "bdev_opal_get_info", 00:10:31.659 "bdev_opal_create", 00:10:31.659 "bdev_nvme_opal_revert", 00:10:31.659 "bdev_nvme_opal_init", 00:10:31.659 "bdev_nvme_send_cmd", 00:10:31.659 "bdev_nvme_set_keys", 00:10:31.659 "bdev_nvme_get_path_iostat", 00:10:31.659 "bdev_nvme_get_mdns_discovery_info", 00:10:31.659 "bdev_nvme_stop_mdns_discovery", 00:10:31.659 "bdev_nvme_start_mdns_discovery", 00:10:31.659 "bdev_nvme_set_multipath_policy", 00:10:31.659 "bdev_nvme_set_preferred_path", 00:10:31.659 "bdev_nvme_get_io_paths", 00:10:31.659 "bdev_nvme_remove_error_injection", 00:10:31.659 "bdev_nvme_add_error_injection", 00:10:31.659 "bdev_nvme_get_discovery_info", 00:10:31.659 "bdev_nvme_stop_discovery", 00:10:31.659 "bdev_nvme_start_discovery", 00:10:31.659 "bdev_nvme_get_controller_health_info", 00:10:31.659 "bdev_nvme_disable_controller", 00:10:31.659 "bdev_nvme_enable_controller", 00:10:31.659 "bdev_nvme_reset_controller", 00:10:31.659 "bdev_nvme_get_transport_statistics", 00:10:31.659 "bdev_nvme_apply_firmware", 00:10:31.659 "bdev_nvme_detach_controller", 00:10:31.659 "bdev_nvme_get_controllers", 00:10:31.659 "bdev_nvme_attach_controller", 00:10:31.659 "bdev_nvme_set_hotplug", 00:10:31.659 "bdev_nvme_set_options", 00:10:31.659 "bdev_passthru_delete", 00:10:31.659 "bdev_passthru_create", 00:10:31.659 "bdev_lvol_set_parent_bdev", 00:10:31.659 "bdev_lvol_set_parent", 00:10:31.659 "bdev_lvol_check_shallow_copy", 00:10:31.659 "bdev_lvol_start_shallow_copy", 00:10:31.659 "bdev_lvol_grow_lvstore", 00:10:31.659 
"bdev_lvol_get_lvols", 00:10:31.659 "bdev_lvol_get_lvstores", 00:10:31.659 "bdev_lvol_delete", 00:10:31.659 "bdev_lvol_set_read_only", 00:10:31.659 "bdev_lvol_resize", 00:10:31.659 "bdev_lvol_decouple_parent", 00:10:31.659 "bdev_lvol_inflate", 00:10:31.659 "bdev_lvol_rename", 00:10:31.659 "bdev_lvol_clone_bdev", 00:10:31.659 "bdev_lvol_clone", 00:10:31.659 "bdev_lvol_snapshot", 00:10:31.659 "bdev_lvol_create", 00:10:31.659 "bdev_lvol_delete_lvstore", 00:10:31.659 "bdev_lvol_rename_lvstore", 00:10:31.659 "bdev_lvol_create_lvstore", 00:10:31.659 "bdev_raid_set_options", 00:10:31.659 "bdev_raid_remove_base_bdev", 00:10:31.659 "bdev_raid_add_base_bdev", 00:10:31.659 "bdev_raid_delete", 00:10:31.659 "bdev_raid_create", 00:10:31.659 "bdev_raid_get_bdevs", 00:10:31.659 "bdev_error_inject_error", 00:10:31.659 "bdev_error_delete", 00:10:31.659 "bdev_error_create", 00:10:31.659 "bdev_split_delete", 00:10:31.659 "bdev_split_create", 00:10:31.659 "bdev_delay_delete", 00:10:31.659 "bdev_delay_create", 00:10:31.659 "bdev_delay_update_latency", 00:10:31.659 "bdev_zone_block_delete", 00:10:31.659 "bdev_zone_block_create", 00:10:31.659 "blobfs_create", 00:10:31.659 "blobfs_detect", 00:10:31.659 "blobfs_set_cache_size", 00:10:31.659 "bdev_aio_delete", 00:10:31.659 "bdev_aio_rescan", 00:10:31.659 "bdev_aio_create", 00:10:31.659 "bdev_ftl_set_property", 00:10:31.659 "bdev_ftl_get_properties", 00:10:31.659 "bdev_ftl_get_stats", 00:10:31.659 "bdev_ftl_unmap", 00:10:31.659 "bdev_ftl_unload", 00:10:31.659 "bdev_ftl_delete", 00:10:31.659 "bdev_ftl_load", 00:10:31.659 "bdev_ftl_create", 00:10:31.659 "bdev_virtio_attach_controller", 00:10:31.659 "bdev_virtio_scsi_get_devices", 00:10:31.659 "bdev_virtio_detach_controller", 00:10:31.659 "bdev_virtio_blk_set_hotplug", 00:10:31.659 "bdev_iscsi_delete", 00:10:31.659 "bdev_iscsi_create", 00:10:31.659 "bdev_iscsi_set_options", 00:10:31.659 "accel_error_inject_error", 00:10:31.659 "ioat_scan_accel_module", 00:10:31.659 "dsa_scan_accel_module", 
00:10:31.659 "iaa_scan_accel_module", 00:10:31.659 "vfu_virtio_create_fs_endpoint", 00:10:31.659 "vfu_virtio_create_scsi_endpoint", 00:10:31.659 "vfu_virtio_scsi_remove_target", 00:10:31.659 "vfu_virtio_scsi_add_target", 00:10:31.659 "vfu_virtio_create_blk_endpoint", 00:10:31.659 "vfu_virtio_delete_endpoint", 00:10:31.659 "keyring_file_remove_key", 00:10:31.659 "keyring_file_add_key", 00:10:31.659 "keyring_linux_set_options", 00:10:31.659 "fsdev_aio_delete", 00:10:31.659 "fsdev_aio_create", 00:10:31.659 "iscsi_get_histogram", 00:10:31.659 "iscsi_enable_histogram", 00:10:31.659 "iscsi_set_options", 00:10:31.659 "iscsi_get_auth_groups", 00:10:31.659 "iscsi_auth_group_remove_secret", 00:10:31.659 "iscsi_auth_group_add_secret", 00:10:31.659 "iscsi_delete_auth_group", 00:10:31.659 "iscsi_create_auth_group", 00:10:31.659 "iscsi_set_discovery_auth", 00:10:31.659 "iscsi_get_options", 00:10:31.659 "iscsi_target_node_request_logout", 00:10:31.659 "iscsi_target_node_set_redirect", 00:10:31.659 "iscsi_target_node_set_auth", 00:10:31.659 "iscsi_target_node_add_lun", 00:10:31.659 "iscsi_get_stats", 00:10:31.659 "iscsi_get_connections", 00:10:31.659 "iscsi_portal_group_set_auth", 00:10:31.659 "iscsi_start_portal_group", 00:10:31.659 "iscsi_delete_portal_group", 00:10:31.659 "iscsi_create_portal_group", 00:10:31.659 "iscsi_get_portal_groups", 00:10:31.659 "iscsi_delete_target_node", 00:10:31.659 "iscsi_target_node_remove_pg_ig_maps", 00:10:31.659 "iscsi_target_node_add_pg_ig_maps", 00:10:31.660 "iscsi_create_target_node", 00:10:31.660 "iscsi_get_target_nodes", 00:10:31.660 "iscsi_delete_initiator_group", 00:10:31.660 "iscsi_initiator_group_remove_initiators", 00:10:31.660 "iscsi_initiator_group_add_initiators", 00:10:31.660 "iscsi_create_initiator_group", 00:10:31.660 "iscsi_get_initiator_groups", 00:10:31.660 "nvmf_set_crdt", 00:10:31.660 "nvmf_set_config", 00:10:31.660 "nvmf_set_max_subsystems", 00:10:31.660 "nvmf_stop_mdns_prr", 00:10:31.660 "nvmf_publish_mdns_prr", 
00:10:31.660 "nvmf_subsystem_get_listeners", 00:10:31.660 "nvmf_subsystem_get_qpairs", 00:10:31.660 "nvmf_subsystem_get_controllers", 00:10:31.660 "nvmf_get_stats", 00:10:31.660 "nvmf_get_transports", 00:10:31.660 "nvmf_create_transport", 00:10:31.660 "nvmf_get_targets", 00:10:31.660 "nvmf_delete_target", 00:10:31.660 "nvmf_create_target", 00:10:31.660 "nvmf_subsystem_allow_any_host", 00:10:31.660 "nvmf_subsystem_set_keys", 00:10:31.660 "nvmf_subsystem_remove_host", 00:10:31.660 "nvmf_subsystem_add_host", 00:10:31.660 "nvmf_ns_remove_host", 00:10:31.660 "nvmf_ns_add_host", 00:10:31.660 "nvmf_subsystem_remove_ns", 00:10:31.660 "nvmf_subsystem_set_ns_ana_group", 00:10:31.660 "nvmf_subsystem_add_ns", 00:10:31.660 "nvmf_subsystem_listener_set_ana_state", 00:10:31.660 "nvmf_discovery_get_referrals", 00:10:31.660 "nvmf_discovery_remove_referral", 00:10:31.660 "nvmf_discovery_add_referral", 00:10:31.660 "nvmf_subsystem_remove_listener", 00:10:31.660 "nvmf_subsystem_add_listener", 00:10:31.660 "nvmf_delete_subsystem", 00:10:31.660 "nvmf_create_subsystem", 00:10:31.660 "nvmf_get_subsystems", 00:10:31.660 "env_dpdk_get_mem_stats", 00:10:31.660 "nbd_get_disks", 00:10:31.660 "nbd_stop_disk", 00:10:31.660 "nbd_start_disk", 00:10:31.660 "ublk_recover_disk", 00:10:31.660 "ublk_get_disks", 00:10:31.660 "ublk_stop_disk", 00:10:31.660 "ublk_start_disk", 00:10:31.660 "ublk_destroy_target", 00:10:31.660 "ublk_create_target", 00:10:31.660 "virtio_blk_create_transport", 00:10:31.660 "virtio_blk_get_transports", 00:10:31.660 "vhost_controller_set_coalescing", 00:10:31.660 "vhost_get_controllers", 00:10:31.660 "vhost_delete_controller", 00:10:31.660 "vhost_create_blk_controller", 00:10:31.660 "vhost_scsi_controller_remove_target", 00:10:31.660 "vhost_scsi_controller_add_target", 00:10:31.660 "vhost_start_scsi_controller", 00:10:31.660 "vhost_create_scsi_controller", 00:10:31.660 "thread_set_cpumask", 00:10:31.660 "scheduler_set_options", 00:10:31.660 "framework_get_governor", 00:10:31.660 
"framework_get_scheduler", 00:10:31.660 "framework_set_scheduler", 00:10:31.660 "framework_get_reactors", 00:10:31.660 "thread_get_io_channels", 00:10:31.660 "thread_get_pollers", 00:10:31.660 "thread_get_stats", 00:10:31.660 "framework_monitor_context_switch", 00:10:31.660 "spdk_kill_instance", 00:10:31.660 "log_enable_timestamps", 00:10:31.660 "log_get_flags", 00:10:31.660 "log_clear_flag", 00:10:31.660 "log_set_flag", 00:10:31.660 "log_get_level", 00:10:31.660 "log_set_level", 00:10:31.660 "log_get_print_level", 00:10:31.660 "log_set_print_level", 00:10:31.660 "framework_enable_cpumask_locks", 00:10:31.660 "framework_disable_cpumask_locks", 00:10:31.660 "framework_wait_init", 00:10:31.660 "framework_start_init", 00:10:31.660 "scsi_get_devices", 00:10:31.660 "bdev_get_histogram", 00:10:31.660 "bdev_enable_histogram", 00:10:31.660 "bdev_set_qos_limit", 00:10:31.660 "bdev_set_qd_sampling_period", 00:10:31.660 "bdev_get_bdevs", 00:10:31.660 "bdev_reset_iostat", 00:10:31.660 "bdev_get_iostat", 00:10:31.660 "bdev_examine", 00:10:31.660 "bdev_wait_for_examine", 00:10:31.660 "bdev_set_options", 00:10:31.660 "accel_get_stats", 00:10:31.660 "accel_set_options", 00:10:31.660 "accel_set_driver", 00:10:31.660 "accel_crypto_key_destroy", 00:10:31.660 "accel_crypto_keys_get", 00:10:31.660 "accel_crypto_key_create", 00:10:31.660 "accel_assign_opc", 00:10:31.660 "accel_get_module_info", 00:10:31.660 "accel_get_opc_assignments", 00:10:31.660 "vmd_rescan", 00:10:31.660 "vmd_remove_device", 00:10:31.660 "vmd_enable", 00:10:31.660 "sock_get_default_impl", 00:10:31.660 "sock_set_default_impl", 00:10:31.660 "sock_impl_set_options", 00:10:31.660 "sock_impl_get_options", 00:10:31.660 "iobuf_get_stats", 00:10:31.660 "iobuf_set_options", 00:10:31.660 "keyring_get_keys", 00:10:31.660 "vfu_tgt_set_base_path", 00:10:31.660 "framework_get_pci_devices", 00:10:31.660 "framework_get_config", 00:10:31.660 "framework_get_subsystems", 00:10:31.660 "fsdev_set_opts", 00:10:31.660 "fsdev_get_opts", 
00:10:31.660 "trace_get_info", 00:10:31.660 "trace_get_tpoint_group_mask", 00:10:31.660 "trace_disable_tpoint_group", 00:10:31.660 "trace_enable_tpoint_group", 00:10:31.660 "trace_clear_tpoint_mask", 00:10:31.660 "trace_set_tpoint_mask", 00:10:31.660 "notify_get_notifications", 00:10:31.660 "notify_get_types", 00:10:31.660 "spdk_get_version", 00:10:31.660 "rpc_get_methods" 00:10:31.660 ] 00:10:31.660 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:31.660 16:34:38 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.660 16:34:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:31.660 16:34:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2940327 00:10:31.660 16:34:38 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2940327 ']' 00:10:31.660 16:34:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2940327 00:10:31.660 16:34:38 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:10:31.660 16:34:38 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.922 16:34:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2940327 00:10:31.922 16:34:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:31.922 16:34:38 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:31.922 16:34:38 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2940327' 00:10:31.922 killing process with pid 2940327 00:10:31.922 16:34:38 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2940327 00:10:31.922 16:34:38 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2940327 00:10:32.182 00:10:32.182 real 0m1.041s 00:10:32.183 user 0m1.757s 00:10:32.183 sys 0m0.416s 00:10:32.183 16:34:38 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:32.183 16:34:38 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.183 ************************************ 00:10:32.183 END TEST spdkcli_tcp 00:10:32.183 ************************************ 00:10:32.183 16:34:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:32.183 16:34:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:32.183 16:34:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:32.183 16:34:39 -- common/autotest_common.sh@10 -- # set +x 00:10:32.183 ************************************ 00:10:32.183 START TEST dpdk_mem_utility 00:10:32.183 ************************************ 00:10:32.183 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:32.183 * Looking for test storage... 00:10:32.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:10:32.183 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:32.183 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:10:32.183 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:32.183 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.183 16:34:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.443 16:34:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:32.443 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.443 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:10:32.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.443 --rc genhtml_branch_coverage=1 00:10:32.443 --rc genhtml_function_coverage=1 00:10:32.443 --rc genhtml_legend=1 00:10:32.443 --rc geninfo_all_blocks=1 00:10:32.443 --rc geninfo_unexecuted_blocks=1 00:10:32.443 00:10:32.443 ' 00:10:32.443 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:32.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.443 --rc genhtml_branch_coverage=1 00:10:32.443 --rc genhtml_function_coverage=1 00:10:32.443 --rc genhtml_legend=1 00:10:32.443 --rc geninfo_all_blocks=1 00:10:32.443 --rc geninfo_unexecuted_blocks=1 00:10:32.443 00:10:32.443 ' 00:10:32.443 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:32.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.443 --rc genhtml_branch_coverage=1 00:10:32.443 --rc genhtml_function_coverage=1 00:10:32.443 --rc genhtml_legend=1 00:10:32.443 --rc geninfo_all_blocks=1 00:10:32.443 --rc geninfo_unexecuted_blocks=1 00:10:32.443 00:10:32.443 ' 00:10:32.443 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:32.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.443 --rc genhtml_branch_coverage=1 00:10:32.443 --rc genhtml_function_coverage=1 00:10:32.443 --rc genhtml_legend=1 00:10:32.443 --rc geninfo_all_blocks=1 00:10:32.444 --rc geninfo_unexecuted_blocks=1 00:10:32.444 00:10:32.444 ' 00:10:32.444 16:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:32.444 16:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2940628 00:10:32.444 16:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2940628 00:10:32.444 16:34:39 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:32.444 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2940628 ']' 00:10:32.444 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.444 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:32.444 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.444 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:32.444 16:34:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:32.444 [2024-11-05 16:34:39.322169] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:10:32.444 [2024-11-05 16:34:39.322224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940628 ] 00:10:32.444 [2024-11-05 16:34:39.394814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.444 [2024-11-05 16:34:39.432006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.387 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:33.387 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:10:33.387 16:34:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:33.387 16:34:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:33.387 16:34:40 dpdk_mem_utility -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.387 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:33.387 { 00:10:33.387 "filename": "/tmp/spdk_mem_dump.txt" 00:10:33.387 } 00:10:33.387 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.387 16:34:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:33.387 DPDK memory size 810.000000 MiB in 1 heap(s) 00:10:33.387 1 heaps totaling size 810.000000 MiB 00:10:33.387 size: 810.000000 MiB heap id: 0 00:10:33.387 end heaps---------- 00:10:33.387 9 mempools totaling size 595.772034 MiB 00:10:33.387 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:33.387 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:33.387 size: 92.545471 MiB name: bdev_io_2940628 00:10:33.387 size: 50.003479 MiB name: msgpool_2940628 00:10:33.387 size: 36.509338 MiB name: fsdev_io_2940628 00:10:33.387 size: 21.763794 MiB name: PDU_Pool 00:10:33.387 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:33.387 size: 4.133484 MiB name: evtpool_2940628 00:10:33.387 size: 0.026123 MiB name: Session_Pool 00:10:33.387 end mempools------- 00:10:33.387 6 memzones totaling size 4.142822 MiB 00:10:33.387 size: 1.000366 MiB name: RG_ring_0_2940628 00:10:33.387 size: 1.000366 MiB name: RG_ring_1_2940628 00:10:33.387 size: 1.000366 MiB name: RG_ring_4_2940628 00:10:33.387 size: 1.000366 MiB name: RG_ring_5_2940628 00:10:33.387 size: 0.125366 MiB name: RG_ring_2_2940628 00:10:33.387 size: 0.015991 MiB name: RG_ring_3_2940628 00:10:33.387 end memzones------- 00:10:33.387 16:34:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:10:33.387 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:10:33.387 list of free elements. 
size: 10.862488 MiB 00:10:33.387 element at address: 0x200018a00000 with size: 0.999878 MiB 00:10:33.387 element at address: 0x200018c00000 with size: 0.999878 MiB 00:10:33.387 element at address: 0x200000400000 with size: 0.998535 MiB 00:10:33.387 element at address: 0x200031800000 with size: 0.994446 MiB 00:10:33.387 element at address: 0x200006400000 with size: 0.959839 MiB 00:10:33.387 element at address: 0x200012c00000 with size: 0.954285 MiB 00:10:33.387 element at address: 0x200018e00000 with size: 0.936584 MiB 00:10:33.387 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:33.387 element at address: 0x20001a600000 with size: 0.582886 MiB 00:10:33.387 element at address: 0x200000c00000 with size: 0.495422 MiB 00:10:33.387 element at address: 0x20000a600000 with size: 0.490723 MiB 00:10:33.387 element at address: 0x200019000000 with size: 0.485657 MiB 00:10:33.387 element at address: 0x200003e00000 with size: 0.481934 MiB 00:10:33.387 element at address: 0x200027a00000 with size: 0.410034 MiB 00:10:33.387 element at address: 0x200000800000 with size: 0.355042 MiB 00:10:33.387 list of standard malloc elements. 
size: 199.218628 MiB 00:10:33.387 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:10:33.387 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:10:33.387 element at address: 0x200018afff80 with size: 1.000122 MiB 00:10:33.387 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:10:33.387 element at address: 0x200018efff80 with size: 1.000122 MiB 00:10:33.387 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:33.387 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:10:33.387 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:33.387 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:10:33.387 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20000085b040 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20000085f300 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20000087f680 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:10:33.387 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200003efb980 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:10:33.387 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20001a695380 with size: 0.000183 MiB 00:10:33.387 element at address: 0x20001a695440 with size: 0.000183 MiB 00:10:33.387 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:10:33.388 element at address: 0x200027a69040 with size: 0.000183 MiB 00:10:33.388 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:10:33.388 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:10:33.388 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:10:33.388 list of memzone associated elements. 
size: 599.918884 MiB 00:10:33.388 element at address: 0x20001a695500 with size: 211.416748 MiB 00:10:33.388 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:33.388 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:10:33.388 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:33.388 element at address: 0x200012df4780 with size: 92.045044 MiB 00:10:33.388 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2940628_0 00:10:33.388 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:33.388 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2940628_0 00:10:33.388 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:10:33.388 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2940628_0 00:10:33.388 element at address: 0x2000191be940 with size: 20.255554 MiB 00:10:33.388 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:33.388 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:10:33.388 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:33.388 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:33.388 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2940628_0 00:10:33.388 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:33.388 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2940628 00:10:33.388 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:33.388 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2940628 00:10:33.388 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:10:33.388 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:33.388 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:10:33.388 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:33.388 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:10:33.388 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:33.388 element at address: 0x200003efba40 with size: 1.008118 MiB 00:10:33.388 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:33.388 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:33.388 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2940628 00:10:33.388 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:33.388 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2940628 00:10:33.388 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:10:33.388 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2940628 00:10:33.388 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:10:33.388 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2940628 00:10:33.388 element at address: 0x20000087f740 with size: 0.500488 MiB 00:10:33.388 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2940628 00:10:33.388 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:33.388 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2940628 00:10:33.388 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:10:33.388 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:33.388 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:10:33.388 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:33.388 element at address: 0x20001907c540 with size: 0.250488 MiB 00:10:33.388 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:33.388 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:33.388 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2940628 00:10:33.388 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:10:33.388 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2940628 00:10:33.388 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:10:33.388 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:33.388 element at address: 0x200027a69100 with size: 0.023743 MiB 00:10:33.388 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:33.388 element at address: 0x20000085b100 with size: 0.016113 MiB 00:10:33.388 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2940628 00:10:33.388 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:10:33.388 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:33.388 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:10:33.388 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2940628 00:10:33.388 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:10:33.388 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2940628 00:10:33.388 element at address: 0x20000085af00 with size: 0.000305 MiB 00:10:33.388 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2940628 00:10:33.388 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:10:33.388 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:33.388 16:34:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:33.388 16:34:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2940628 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2940628 ']' 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2940628 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2940628 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:33.388 16:34:40 
dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2940628' 00:10:33.388 killing process with pid 2940628 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2940628 00:10:33.388 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2940628 00:10:33.648 00:10:33.648 real 0m1.430s 00:10:33.648 user 0m1.532s 00:10:33.648 sys 0m0.400s 00:10:33.648 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:33.648 16:34:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:33.648 ************************************ 00:10:33.648 END TEST dpdk_mem_utility 00:10:33.648 ************************************ 00:10:33.649 16:34:40 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:10:33.649 16:34:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:33.649 16:34:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.649 16:34:40 -- common/autotest_common.sh@10 -- # set +x 00:10:33.649 ************************************ 00:10:33.649 START TEST event 00:10:33.649 ************************************ 00:10:33.649 16:34:40 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:10:33.649 * Looking for test storage... 
00:10:33.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:10:33.649 16:34:40 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:33.649 16:34:40 event -- common/autotest_common.sh@1691 -- # lcov --version 00:10:33.649 16:34:40 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:33.910 16:34:40 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:33.910 16:34:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.910 16:34:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.910 16:34:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.910 16:34:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.910 16:34:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.910 16:34:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.910 16:34:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.910 16:34:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.910 16:34:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.910 16:34:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.910 16:34:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.910 16:34:40 event -- scripts/common.sh@344 -- # case "$op" in 00:10:33.910 16:34:40 event -- scripts/common.sh@345 -- # : 1 00:10:33.910 16:34:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.910 16:34:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.910 16:34:40 event -- scripts/common.sh@365 -- # decimal 1 00:10:33.910 16:34:40 event -- scripts/common.sh@353 -- # local d=1 00:10:33.910 16:34:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.910 16:34:40 event -- scripts/common.sh@355 -- # echo 1 00:10:33.910 16:34:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.910 16:34:40 event -- scripts/common.sh@366 -- # decimal 2 00:10:33.910 16:34:40 event -- scripts/common.sh@353 -- # local d=2 00:10:33.910 16:34:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.910 16:34:40 event -- scripts/common.sh@355 -- # echo 2 00:10:33.910 16:34:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.910 16:34:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.910 16:34:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.910 16:34:40 event -- scripts/common.sh@368 -- # return 0 00:10:33.910 16:34:40 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.910 16:34:40 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:33.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.910 --rc genhtml_branch_coverage=1 00:10:33.910 --rc genhtml_function_coverage=1 00:10:33.910 --rc genhtml_legend=1 00:10:33.910 --rc geninfo_all_blocks=1 00:10:33.910 --rc geninfo_unexecuted_blocks=1 00:10:33.910 00:10:33.910 ' 00:10:33.910 16:34:40 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:33.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.910 --rc genhtml_branch_coverage=1 00:10:33.910 --rc genhtml_function_coverage=1 00:10:33.910 --rc genhtml_legend=1 00:10:33.910 --rc geninfo_all_blocks=1 00:10:33.910 --rc geninfo_unexecuted_blocks=1 00:10:33.910 00:10:33.910 ' 00:10:33.910 16:34:40 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:33.910 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:33.910 --rc genhtml_branch_coverage=1 00:10:33.910 --rc genhtml_function_coverage=1 00:10:33.910 --rc genhtml_legend=1 00:10:33.910 --rc geninfo_all_blocks=1 00:10:33.910 --rc geninfo_unexecuted_blocks=1 00:10:33.910 00:10:33.910 ' 00:10:33.910 16:34:40 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:33.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.910 --rc genhtml_branch_coverage=1 00:10:33.910 --rc genhtml_function_coverage=1 00:10:33.910 --rc genhtml_legend=1 00:10:33.910 --rc geninfo_all_blocks=1 00:10:33.910 --rc geninfo_unexecuted_blocks=1 00:10:33.910 00:10:33.910 ' 00:10:33.910 16:34:40 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:33.910 16:34:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:33.910 16:34:40 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:33.910 16:34:40 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:33.910 16:34:40 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.910 16:34:40 event -- common/autotest_common.sh@10 -- # set +x 00:10:33.910 ************************************ 00:10:33.910 START TEST event_perf 00:10:33.910 ************************************ 00:10:33.910 16:34:40 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:33.910 Running I/O for 1 seconds...[2024-11-05 16:34:40.806295] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:33.910 [2024-11-05 16:34:40.806398] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941022 ] 00:10:33.910 [2024-11-05 16:34:40.893434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.910 [2024-11-05 16:34:40.932980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.910 [2024-11-05 16:34:40.933091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.910 [2024-11-05 16:34:40.933275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.910 Running I/O for 1 seconds...[2024-11-05 16:34:40.933276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.294 00:10:35.294 lcore 0: 176101 00:10:35.294 lcore 1: 176100 00:10:35.294 lcore 2: 176098 00:10:35.294 lcore 3: 176101 00:10:35.294 done. 
00:10:35.294 00:10:35.294 real 0m1.183s 00:10:35.294 user 0m4.101s 00:10:35.294 sys 0m0.078s 00:10:35.294 16:34:41 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:35.294 16:34:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:35.294 ************************************ 00:10:35.294 END TEST event_perf 00:10:35.294 ************************************ 00:10:35.294 16:34:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:35.294 16:34:42 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:35.294 16:34:42 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:35.294 16:34:42 event -- common/autotest_common.sh@10 -- # set +x 00:10:35.294 ************************************ 00:10:35.294 START TEST event_reactor 00:10:35.294 ************************************ 00:10:35.294 16:34:42 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:35.294 [2024-11-05 16:34:42.068621] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:35.294 [2024-11-05 16:34:42.068723] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941385 ] 00:10:35.294 [2024-11-05 16:34:42.143370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.294 [2024-11-05 16:34:42.177319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.235 test_start 00:10:36.235 oneshot 00:10:36.235 tick 100 00:10:36.235 tick 100 00:10:36.235 tick 250 00:10:36.235 tick 100 00:10:36.235 tick 100 00:10:36.235 tick 250 00:10:36.235 tick 100 00:10:36.236 tick 500 00:10:36.236 tick 100 00:10:36.236 tick 100 00:10:36.236 tick 250 00:10:36.236 tick 100 00:10:36.236 tick 100 00:10:36.236 test_end 00:10:36.236 00:10:36.236 real 0m1.162s 00:10:36.236 user 0m1.104s 00:10:36.236 sys 0m0.055s 00:10:36.236 16:34:43 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:36.236 16:34:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:36.236 ************************************ 00:10:36.236 END TEST event_reactor 00:10:36.236 ************************************ 00:10:36.236 16:34:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:36.236 16:34:43 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:36.236 16:34:43 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.236 16:34:43 event -- common/autotest_common.sh@10 -- # set +x 00:10:36.236 ************************************ 00:10:36.236 START TEST event_reactor_perf 00:10:36.236 ************************************ 00:10:36.236 16:34:43 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:10:36.496 [2024-11-05 16:34:43.301593] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:10:36.496 [2024-11-05 16:34:43.301684] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941584 ] 00:10:36.496 [2024-11-05 16:34:43.376896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.496 [2024-11-05 16:34:43.414245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.437 test_start 00:10:37.437 test_end 00:10:37.437 Performance: 371322 events per second 00:10:37.437 00:10:37.438 real 0m1.165s 00:10:37.438 user 0m1.102s 00:10:37.438 sys 0m0.060s 00:10:37.438 16:34:44 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:37.438 16:34:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:37.438 ************************************ 00:10:37.438 END TEST event_reactor_perf 00:10:37.438 ************************************ 00:10:37.438 16:34:44 event -- event/event.sh@49 -- # uname -s 00:10:37.438 16:34:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:37.438 16:34:44 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:37.438 16:34:44 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:37.438 16:34:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:37.438 16:34:44 event -- common/autotest_common.sh@10 -- # set +x 00:10:37.701 ************************************ 00:10:37.701 START TEST event_scheduler 00:10:37.701 ************************************ 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:37.701 * Looking for test storage... 00:10:37.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.701 16:34:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:37.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.701 --rc genhtml_branch_coverage=1 00:10:37.701 --rc genhtml_function_coverage=1 00:10:37.701 --rc genhtml_legend=1 00:10:37.701 --rc geninfo_all_blocks=1 00:10:37.701 --rc geninfo_unexecuted_blocks=1 00:10:37.701 00:10:37.701 ' 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:37.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.701 --rc genhtml_branch_coverage=1 00:10:37.701 --rc genhtml_function_coverage=1 00:10:37.701 --rc 
genhtml_legend=1 00:10:37.701 --rc geninfo_all_blocks=1 00:10:37.701 --rc geninfo_unexecuted_blocks=1 00:10:37.701 00:10:37.701 ' 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:37.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.701 --rc genhtml_branch_coverage=1 00:10:37.701 --rc genhtml_function_coverage=1 00:10:37.701 --rc genhtml_legend=1 00:10:37.701 --rc geninfo_all_blocks=1 00:10:37.701 --rc geninfo_unexecuted_blocks=1 00:10:37.701 00:10:37.701 ' 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:37.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.701 --rc genhtml_branch_coverage=1 00:10:37.701 --rc genhtml_function_coverage=1 00:10:37.701 --rc genhtml_legend=1 00:10:37.701 --rc geninfo_all_blocks=1 00:10:37.701 --rc geninfo_unexecuted_blocks=1 00:10:37.701 00:10:37.701 ' 00:10:37.701 16:34:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:37.701 16:34:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2941826 00:10:37.701 16:34:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:37.701 16:34:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2941826 00:10:37.701 16:34:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2941826 ']' 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:37.701 16:34:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:37.701 [2024-11-05 16:34:44.752308] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:10:37.701 [2024-11-05 16:34:44.752378] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941826 ] 00:10:37.962 [2024-11-05 16:34:44.817118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.962 [2024-11-05 16:34:44.857303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.962 [2024-11-05 16:34:44.857463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.962 [2024-11-05 16:34:44.857616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.962 [2024-11-05 16:34:44.857617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:10:37.962 16:34:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:37.962 [2024-11-05 16:34:44.906097] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:10:37.962 [2024-11-05 16:34:44.906112] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:37.962 [2024-11-05 16:34:44.906119] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:37.962 [2024-11-05 16:34:44.906124] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:37.962 [2024-11-05 16:34:44.906128] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.962 16:34:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:37.962 [2024-11-05 16:34:44.966338] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.962 16:34:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:37.962 16:34:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:37.962 ************************************ 00:10:37.962 START TEST scheduler_create_thread 00:10:37.962 ************************************ 00:10:37.962 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:10:37.962 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:37.962 16:34:45 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.962 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:37.962 2 00:10:37.962 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.962 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:37.962 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.962 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.224 3 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.224 4 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.224 5 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.224 16:34:45 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.224 6 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.224 7 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.224 8 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.224 16:34:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.224 9 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.224 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:38.484 10 00:10:38.484 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.484 16:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:38.484 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.745 16:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 16:34:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.141 16:34:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:40.141 16:34:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:40.141 16:34:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 16:34:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:40.710 16:34:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.710 16:34:47 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:40.710 16:34:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.710 16:34:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.651 16:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.651 16:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:41.651 16:34:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:41.651 16:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.651 16:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.222 16:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.222 00:10:42.222 real 0m4.224s 00:10:42.222 user 0m0.027s 00:10:42.222 sys 0m0.005s 00:10:42.222 16:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.222 16:34:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.222 ************************************ 00:10:42.222 END TEST scheduler_create_thread 00:10:42.222 ************************************ 00:10:42.222 16:34:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:42.222 16:34:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2941826 00:10:42.222 16:34:49 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2941826 ']' 00:10:42.222 16:34:49 event.event_scheduler -- common/autotest_common.sh@956 -- # 
kill -0 2941826 00:10:42.222 16:34:49 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:10:42.222 16:34:49 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:42.222 16:34:49 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2941826 00:10:42.483 16:34:49 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:10:42.483 16:34:49 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:10:42.483 16:34:49 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2941826' 00:10:42.483 killing process with pid 2941826 00:10:42.483 16:34:49 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2941826 00:10:42.483 16:34:49 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2941826 00:10:42.483 [2024-11-05 16:34:49.507501] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:10:42.749 00:10:42.749 real 0m5.143s 00:10:42.749 user 0m10.227s 00:10:42.749 sys 0m0.359s 00:10:42.749 16:34:49 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.749 16:34:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:42.749 ************************************ 00:10:42.749 END TEST event_scheduler 00:10:42.749 ************************************ 00:10:42.749 16:34:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:42.749 16:34:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:42.749 16:34:49 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:42.749 16:34:49 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.749 16:34:49 event -- common/autotest_common.sh@10 -- # set +x 00:10:42.749 ************************************ 00:10:42.749 START TEST app_repeat 00:10:42.749 ************************************ 00:10:42.749 16:34:49 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:10:42.749 16:34:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.749 16:34:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.749 16:34:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:42.749 16:34:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2942887 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2942887' 00:10:42.750 
Process app_repeat pid: 2942887 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:42.750 spdk_app_start Round 0 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2942887 /var/tmp/spdk-nbd.sock 00:10:42.750 16:34:49 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2942887 ']' 00:10:42.750 16:34:49 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:42.750 16:34:49 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.750 16:34:49 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:42.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:42.750 16:34:49 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.750 16:34:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:42.750 16:34:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:42.750 [2024-11-05 16:34:49.778033] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:42.750 [2024-11-05 16:34:49.778102] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942887 ] 00:10:43.015 [2024-11-05 16:34:49.852408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.015 [2024-11-05 16:34:49.890859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.015 [2024-11-05 16:34:49.890862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.015 16:34:49 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:43.015 16:34:49 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:43.015 16:34:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:43.276 Malloc0 00:10:43.276 16:34:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:43.276 Malloc1 00:10:43.276 16:34:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:43.276 
16:34:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.276 16:34:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:43.536 /dev/nbd0 00:10:43.536 16:34:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:43.536 16:34:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:10:43.536 1+0 records in 00:10:43.536 1+0 records out 00:10:43.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209501 s, 19.6 MB/s 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:43.536 16:34:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:43.536 16:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.536 16:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.536 16:34:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:43.797 /dev/nbd1 00:10:43.797 16:34:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:43.797 16:34:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:43.797 16:34:50 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:43.797 1+0 records in 00:10:43.797 1+0 records out 00:10:43.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209849 s, 19.5 MB/s 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:43.797 16:34:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:43.797 16:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.797 16:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.797 16:34:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:43.797 16:34:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.797 16:34:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:44.059 { 00:10:44.059 "nbd_device": "/dev/nbd0", 00:10:44.059 "bdev_name": "Malloc0" 00:10:44.059 }, 00:10:44.059 { 00:10:44.059 "nbd_device": "/dev/nbd1", 00:10:44.059 "bdev_name": "Malloc1" 00:10:44.059 } 00:10:44.059 ]' 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:44.059 { 00:10:44.059 "nbd_device": "/dev/nbd0", 00:10:44.059 "bdev_name": "Malloc0" 00:10:44.059 
}, 00:10:44.059 { 00:10:44.059 "nbd_device": "/dev/nbd1", 00:10:44.059 "bdev_name": "Malloc1" 00:10:44.059 } 00:10:44.059 ]' 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:44.059 /dev/nbd1' 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:44.059 /dev/nbd1' 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:44.059 16:34:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:44.059 256+0 records in 00:10:44.059 256+0 records out 00:10:44.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126483 s, 82.9 MB/s 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:44.059 256+0 records in 00:10:44.059 256+0 records out 00:10:44.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179217 s, 58.5 MB/s 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:44.059 256+0 records in 00:10:44.059 256+0 records out 00:10:44.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192076 s, 54.6 MB/s 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:44.059 16:34:51 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:44.059 16:34:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:44.320 16:34:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:44.582 16:34:51 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:44.582 16:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:44.843 16:34:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:44.843 16:34:51 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:44.843 16:34:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:45.167 [2024-11-05 16:34:51.975790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.167 [2024-11-05 16:34:52.011533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.167 [2024-11-05 16:34:52.011535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.167 [2024-11-05 16:34:52.043011] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:45.167 [2024-11-05 16:34:52.043046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:48.544 16:34:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:48.544 16:34:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:48.544 spdk_app_start Round 1 00:10:48.544 16:34:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2942887 /var/tmp/spdk-nbd.sock 00:10:48.544 16:34:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2942887 ']' 00:10:48.544 16:34:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:48.544 16:34:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.544 16:34:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:48.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
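The teardown sequence traced above ends by counting surviving nbd devices: `nbd_get_disks` is queried over the RPC socket, `jq -r` extracts each `nbd_device` field, and `grep -c /dev/nbd` counts the matches (expected 0 once both disks are stopped). A minimal sketch of that check, not the SPDK source — `nbd_get_count_sketch` and its JSON argument are stand-ins for the `rpc.py` call in the real test:

```shell
# Sketch of the nbd_get_count pattern from the trace: count how many
# /dev/nbd entries remain in the disk-list JSON. The JSON is passed in
# directly here instead of coming from rpc.py.
nbd_get_count_sketch() {
    local disks_json=$1
    local names count
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    # grep -c still prints "0" when nothing matches but exits non-zero,
    # hence the `|| true` guard (the real trace shows the same `true`).
    count=$(echo "$names" | grep -c /dev/nbd || true)
    echo "$count"
}
```

With an empty list (`[]`) this prints `0`, matching the `count=0` seen in the trace after both `nbd_stop_disk` calls.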
00:10:48.544 16:34:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.544 16:34:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:48.544 16:34:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:48.544 16:34:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:48.544 16:34:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:48.544 Malloc0 00:10:48.544 16:34:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:48.544 Malloc1 00:10:48.544 16:34:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:48.544 /dev/nbd0 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:48.544 16:34:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:48.544 16:34:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:48.544 16:34:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:48.544 16:34:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:48.544 16:34:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:48.544 16:34:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:48.805 1+0 records in 00:10:48.805 1+0 records out 00:10:48.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002468 s, 16.6 MB/s 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:48.805 16:34:55 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:48.805 /dev/nbd1 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:48.805 1+0 records in 00:10:48.805 1+0 records out 00:10:48.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239758 s, 17.1 MB/s 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:48.805 16:34:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.805 16:34:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:49.066 { 00:10:49.066 "nbd_device": "/dev/nbd0", 00:10:49.066 "bdev_name": "Malloc0" 00:10:49.066 }, 00:10:49.066 { 00:10:49.066 "nbd_device": "/dev/nbd1", 00:10:49.066 "bdev_name": "Malloc1" 00:10:49.066 } 00:10:49.066 ]' 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:49.066 { 00:10:49.066 "nbd_device": "/dev/nbd0", 00:10:49.066 "bdev_name": "Malloc0" 00:10:49.066 }, 00:10:49.066 { 00:10:49.066 "nbd_device": "/dev/nbd1", 00:10:49.066 "bdev_name": "Malloc1" 00:10:49.066 } 00:10:49.066 ]' 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:49.066 /dev/nbd1' 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:49.066 16:34:56 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:49.066 /dev/nbd1' 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:49.066 256+0 records in 00:10:49.066 256+0 records out 00:10:49.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127512 s, 82.2 MB/s 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:49.066 256+0 records in 00:10:49.066 256+0 records out 00:10:49.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165017 s, 63.5 MB/s 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:49.066 256+0 records in 00:10:49.066 256+0 records out 00:10:49.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196193 s, 53.4 MB/s 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:49.066 16:34:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.327 16:34:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:49.588 16:34:56 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.588 16:34:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:49.849 16:34:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:49.849 16:34:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:50.110 16:34:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:50.110 [2024-11-05 16:34:57.042683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:50.110 [2024-11-05 16:34:57.078238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.110 [2024-11-05 16:34:57.078240] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.110 [2024-11-05 16:34:57.110524] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:50.110 [2024-11-05 16:34:57.110559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:53.412 16:34:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:53.412 16:34:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:53.412 spdk_app_start Round 2 00:10:53.412 16:34:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2942887 /var/tmp/spdk-nbd.sock 00:10:53.412 16:34:59 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2942887 ']' 00:10:53.412 16:34:59 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:53.412 16:34:59 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:53.412 16:34:59 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:53.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
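Each round above runs the same data pass: `dd` fills a 1 MiB scratch file from /dev/urandom, copies it onto every nbd device, then `cmp -n 1M` verifies each device against the file before the scratch file is removed. A sketch of that write/verify loop, assuming plain files as stand-ins for `/dev/nbd0`/`/dev/nbd1` (the real test adds `oflag=direct`/`iflag=direct`, which only makes sense against block devices):

```shell
# Sketch (not the SPDK helper) of the nbd_dd_data_verify write+verify
# pattern: write random data to every target, then compare each target
# back against the source file.
nbd_dd_data_verify_sketch() {
    local tmp_file=$1; shift   # scratch file; remaining args = targets
    local dev
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    for dev in "$@"; do
        # conv=notrunc mirrors writing into a fixed-size device
        dd if="$tmp_file" of="$dev" bs=4096 count=256 conv=notrunc status=none
    done
    for dev in "$@"; do
        cmp -n 1M "$tmp_file" "$dev" || return 1
    done
    rm -f "$tmp_file"
}
```

A mismatch on any target fails the whole pass, which is why the trace only reaches the `rm nbdrandtest` line after both `cmp` calls succeed.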
00:10:53.412 16:34:59 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:53.412 16:34:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:53.412 16:35:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:53.412 16:35:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:53.412 16:35:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:53.412 Malloc0 00:10:53.412 16:35:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:53.412 Malloc1 00:10:53.412 16:35:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:53.412 16:35:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:53.674 /dev/nbd0 00:10:53.674 16:35:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:53.674 16:35:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:53.674 1+0 records in 00:10:53.674 1+0 records out 00:10:53.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297826 s, 13.8 MB/s 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:53.674 16:35:00 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:53.674 16:35:00 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:53.674 16:35:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.674 16:35:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:53.674 16:35:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:53.935 /dev/nbd1 00:10:53.935 16:35:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:53.935 16:35:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:53.935 1+0 records in 00:10:53.935 1+0 records out 00:10:53.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023264 s, 17.6 MB/s 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:53.935 16:35:00 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:53.935 16:35:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.935 16:35:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:53.935 16:35:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:53.935 16:35:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.935 16:35:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:54.196 { 00:10:54.196 "nbd_device": "/dev/nbd0", 00:10:54.196 "bdev_name": "Malloc0" 00:10:54.196 }, 00:10:54.196 { 00:10:54.196 "nbd_device": "/dev/nbd1", 00:10:54.196 "bdev_name": "Malloc1" 00:10:54.196 } 00:10:54.196 ]' 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:54.196 { 00:10:54.196 "nbd_device": "/dev/nbd0", 00:10:54.196 "bdev_name": "Malloc0" 00:10:54.196 }, 00:10:54.196 { 00:10:54.196 "nbd_device": "/dev/nbd1", 00:10:54.196 "bdev_name": "Malloc1" 00:10:54.196 } 00:10:54.196 ]' 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:54.196 /dev/nbd1' 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:54.196 /dev/nbd1' 00:10:54.196 
16:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:54.196 256+0 records in 00:10:54.196 256+0 records out 00:10:54.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116756 s, 89.8 MB/s 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:54.196 256+0 records in 00:10:54.196 256+0 records out 00:10:54.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166839 s, 62.8 MB/s 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:54.196 256+0 records in 00:10:54.196 256+0 records out 00:10:54.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177636 s, 59.0 MB/s 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:54.196 16:35:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:54.197 16:35:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:54.197 16:35:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.197 16:35:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:10:54.197 16:35:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:54.197 16:35:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:54.197 16:35:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:54.197 16:35:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:54.457 16:35:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:54.458 16:35:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:54.719 16:35:01 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:54.719 16:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:54.979 16:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:54.979 16:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:54.979 16:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:54.979 16:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:54.979 16:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:54.979 16:35:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:54.979 16:35:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:54.979 16:35:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:54.980 16:35:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:54.980 16:35:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:54.980 16:35:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:55.240 [2024-11-05 16:35:02.107808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:55.240 [2024-11-05 16:35:02.143304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.240 [2024-11-05 16:35:02.143306] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.240 [2024-11-05 16:35:02.174939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:55.240 [2024-11-05 16:35:02.174974] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:58.541 16:35:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2942887 /var/tmp/spdk-nbd.sock 00:10:58.541 16:35:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2942887 ']' 00:10:58.541 16:35:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:58.541 16:35:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.541 16:35:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:58.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:10:58.541 16:35:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.541 16:35:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:58.541 16:35:05 event.app_repeat -- event/event.sh@39 -- # killprocess 2942887 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2942887 ']' 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2942887 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2942887 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2942887' 00:10:58.541 killing process with pid 2942887 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2942887 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2942887 00:10:58.541 spdk_app_start is called in Round 0. 00:10:58.541 Shutdown signal received, stop current app iteration 00:10:58.541 Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 reinitialization... 00:10:58.541 spdk_app_start is called in Round 1. 00:10:58.541 Shutdown signal received, stop current app iteration 00:10:58.541 Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 reinitialization... 00:10:58.541 spdk_app_start is called in Round 2. 
00:10:58.541 Shutdown signal received, stop current app iteration 00:10:58.541 Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 reinitialization... 00:10:58.541 spdk_app_start is called in Round 3. 00:10:58.541 Shutdown signal received, stop current app iteration 00:10:58.541 16:35:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:58.541 16:35:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:58.541 00:10:58.541 real 0m15.579s 00:10:58.541 user 0m33.894s 00:10:58.541 sys 0m2.283s 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.541 16:35:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:58.541 ************************************ 00:10:58.541 END TEST app_repeat 00:10:58.541 ************************************ 00:10:58.541 16:35:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:58.541 16:35:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:10:58.541 16:35:05 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:58.541 16:35:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.541 16:35:05 event -- common/autotest_common.sh@10 -- # set +x 00:10:58.541 ************************************ 00:10:58.541 START TEST cpu_locks 00:10:58.541 ************************************ 00:10:58.541 16:35:05 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:10:58.541 * Looking for test storage... 
00:10:58.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:10:58.541 16:35:05 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.542 16:35:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:58.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.542 --rc genhtml_branch_coverage=1 00:10:58.542 --rc genhtml_function_coverage=1 00:10:58.542 --rc genhtml_legend=1 00:10:58.542 --rc geninfo_all_blocks=1 00:10:58.542 --rc geninfo_unexecuted_blocks=1 00:10:58.542 00:10:58.542 ' 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:58.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.542 --rc genhtml_branch_coverage=1 00:10:58.542 --rc genhtml_function_coverage=1 00:10:58.542 --rc genhtml_legend=1 00:10:58.542 --rc geninfo_all_blocks=1 00:10:58.542 --rc geninfo_unexecuted_blocks=1 
00:10:58.542 00:10:58.542 ' 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:58.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.542 --rc genhtml_branch_coverage=1 00:10:58.542 --rc genhtml_function_coverage=1 00:10:58.542 --rc genhtml_legend=1 00:10:58.542 --rc geninfo_all_blocks=1 00:10:58.542 --rc geninfo_unexecuted_blocks=1 00:10:58.542 00:10:58.542 ' 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:58.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.542 --rc genhtml_branch_coverage=1 00:10:58.542 --rc genhtml_function_coverage=1 00:10:58.542 --rc genhtml_legend=1 00:10:58.542 --rc geninfo_all_blocks=1 00:10:58.542 --rc geninfo_unexecuted_blocks=1 00:10:58.542 00:10:58.542 ' 00:10:58.542 16:35:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:58.542 16:35:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:58.542 16:35:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:58.542 16:35:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.542 16:35:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.804 ************************************ 00:10:58.804 START TEST default_locks 00:10:58.804 ************************************ 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2946456 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2946456 00:10:58.804 16:35:05 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2946456 ']' 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.804 16:35:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.804 [2024-11-05 16:35:05.689693] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:10:58.804 [2024-11-05 16:35:05.689743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946456 ] 00:10:58.804 [2024-11-05 16:35:05.759892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.804 [2024-11-05 16:35:05.796208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.748 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.748 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:10:59.748 16:35:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2946456 00:10:59.748 16:35:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2946456 00:10:59.748 16:35:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:00.009 lslocks: write error 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2946456 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2946456 ']' 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2946456 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2946456 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 2946456' 00:11:00.009 killing process with pid 2946456 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2946456 00:11:00.009 16:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2946456 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2946456 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2946456 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2946456 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2946456 ']' 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2946456) - No such process 00:11:00.270 ERROR: process (pid: 2946456) is no longer running 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:00.270 00:11:00.270 real 0m1.487s 00:11:00.270 user 0m1.619s 00:11:00.270 sys 0m0.490s 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:00.270 16:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.270 ************************************ 00:11:00.270 END TEST default_locks 00:11:00.270 ************************************ 00:11:00.270 16:35:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:00.270 16:35:07 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:00.270 16:35:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.270 16:35:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.270 ************************************ 00:11:00.270 START TEST default_locks_via_rpc 00:11:00.270 ************************************ 00:11:00.270 16:35:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:11:00.270 16:35:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2946824 00:11:00.270 16:35:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2946824 00:11:00.270 16:35:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:00.270 16:35:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2946824 ']' 00:11:00.270 16:35:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.271 16:35:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:00.271 16:35:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.271 16:35:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:00.271 16:35:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.271 [2024-11-05 16:35:07.248300] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:11:00.271 [2024-11-05 16:35:07.248348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946824 ] 00:11:00.271 [2024-11-05 16:35:07.319287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.531 [2024-11-05 16:35:07.353846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.101 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:01.101 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:01.101 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.102 16:35:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2946824 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2946824 00:11:01.102 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2946824 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2946824 ']' 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2946824 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2946824 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2946824' 00:11:01.672 killing process with pid 2946824 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2946824 00:11:01.672 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2946824 00:11:01.933 00:11:01.933 real 0m1.602s 00:11:01.933 user 0m1.731s 00:11:01.933 sys 0m0.540s 00:11:01.933 16:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.933 16:35:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.933 ************************************ 00:11:01.933 END TEST default_locks_via_rpc 00:11:01.933 ************************************ 00:11:01.933 16:35:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:01.933 16:35:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:01.933 16:35:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.933 16:35:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.933 ************************************ 00:11:01.933 START TEST non_locking_app_on_locked_coremask 00:11:01.933 ************************************ 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2947190 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2947190 /var/tmp/spdk.sock 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2947190 ']' 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:01.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:01.933 16:35:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:01.933 [2024-11-05 16:35:08.921213] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:01.933 [2024-11-05 16:35:08.921254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947190 ] 00:11:01.933 [2024-11-05 16:35:08.983551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.194 [2024-11-05 16:35:09.018951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2947196 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2947196 /var/tmp/spdk2.sock 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2947196 ']' 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:02.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:02.194 16:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:02.454 [2024-11-05 16:35:09.262946] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:02.454 [2024-11-05 16:35:09.263001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947196 ] 00:11:02.454 [2024-11-05 16:35:09.374538] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:02.454 [2024-11-05 16:35:09.374565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.454 [2024-11-05 16:35:09.442626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.034 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:03.034 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:03.034 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2947190 00:11:03.034 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2947190 00:11:03.034 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:03.605 lslocks: write error 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2947190 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2947190 ']' 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2947190 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2947190 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 2947190' 00:11:03.605 killing process with pid 2947190 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2947190 00:11:03.605 16:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2947190 00:11:04.176 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2947196 00:11:04.176 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2947196 ']' 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2947196 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2947196 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2947196' 00:11:04.177 killing process with pid 2947196 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2947196 00:11:04.177 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2947196 00:11:04.437 00:11:04.437 real 0m2.463s 00:11:04.437 user 0m2.655s 00:11:04.437 sys 0m0.881s 00:11:04.437 16:35:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:04.437 16:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:04.437 ************************************ 00:11:04.437 END TEST non_locking_app_on_locked_coremask 00:11:04.437 ************************************ 00:11:04.437 16:35:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:04.437 16:35:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:04.437 16:35:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:04.437 16:35:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:04.437 ************************************ 00:11:04.437 START TEST locking_app_on_unlocked_coremask 00:11:04.437 ************************************ 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2947590 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2947590 /var/tmp/spdk.sock 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2947590 ']' 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:04.437 16:35:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:04.437 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:04.437 [2024-11-05 16:35:11.447403] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:04.437 [2024-11-05 16:35:11.447453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947590 ] 00:11:04.698 [2024-11-05 16:35:11.516939] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:04.698 [2024-11-05 16:35:11.516964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.698 [2024-11-05 16:35:11.552660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2947746 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2947746 /var/tmp/spdk2.sock 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2947746 ']' 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:04.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:04.698 16:35:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:04.958 [2024-11-05 16:35:11.801994] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:11:04.959 [2024-11-05 16:35:11.802045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947746 ] 00:11:04.959 [2024-11-05 16:35:11.915045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.959 [2024-11-05 16:35:11.987642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.529 16:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.529 16:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:05.529 16:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2947746 00:11:05.529 16:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2947746 00:11:05.529 16:35:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:06.470 lslocks: write error 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2947590 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2947590 ']' 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2947590 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2947590 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2947590' 00:11:06.470 killing process with pid 2947590 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2947590 00:11:06.470 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2947590 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2947746 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2947746 ']' 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2947746 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2947746 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2947746' 00:11:06.730 killing process with pid 2947746 00:11:06.730 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2947746 00:11:06.730 16:35:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2947746 00:11:06.991 00:11:06.991 real 0m2.507s 00:11:06.991 user 0m2.718s 00:11:06.991 sys 0m0.878s 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:06.991 ************************************ 00:11:06.991 END TEST locking_app_on_unlocked_coremask 00:11:06.991 ************************************ 00:11:06.991 16:35:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:06.991 16:35:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:06.991 16:35:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.991 16:35:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:06.991 ************************************ 00:11:06.991 START TEST locking_app_on_locked_coremask 00:11:06.991 ************************************ 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2948275 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2948275 /var/tmp/spdk.sock 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2948275 ']' 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:06.991 16:35:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:06.991 [2024-11-05 16:35:14.055660] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:06.991 [2024-11-05 16:35:14.055714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948275 ] 00:11:07.252 [2024-11-05 16:35:14.129504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.252 [2024-11-05 16:35:14.163893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2948328 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2948328 /var/tmp/spdk2.sock 
00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2948328 /var/tmp/spdk2.sock 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2948328 /var/tmp/spdk2.sock 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2948328 ']' 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:07.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:07.823 16:35:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:08.083 [2024-11-05 16:35:14.895274] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:08.083 [2024-11-05 16:35:14.895327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948328 ] 00:11:08.083 [2024-11-05 16:35:15.005734] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2948275 has claimed it. 00:11:08.083 [2024-11-05 16:35:15.005786] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:08.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2948328) - No such process 00:11:08.654 ERROR: process (pid: 2948328) is no longer running 00:11:08.654 16:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.654 16:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:11:08.654 16:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:08.654 16:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:08.654 16:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:08.654 16:35:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:08.654 16:35:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2948275 00:11:08.654 16:35:15 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2948275 00:11:08.654 16:35:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:09.224 lslocks: write error 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2948275 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2948275 ']' 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2948275 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2948275 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2948275' 00:11:09.224 killing process with pid 2948275 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2948275 00:11:09.224 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2948275 00:11:09.484 00:11:09.484 real 0m2.365s 00:11:09.484 user 0m2.677s 00:11:09.484 sys 0m0.643s 00:11:09.484 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.484 16:35:16 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.484 ************************************ 00:11:09.484 END TEST locking_app_on_locked_coremask 00:11:09.484 ************************************ 00:11:09.484 16:35:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:09.484 16:35:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:09.484 16:35:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.484 16:35:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:09.484 ************************************ 00:11:09.484 START TEST locking_overlapped_coremask 00:11:09.484 ************************************ 00:11:09.484 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:11:09.484 16:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2948665 00:11:09.484 16:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2948665 /var/tmp/spdk.sock 00:11:09.485 16:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:11:09.485 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2948665 ']' 00:11:09.485 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.485 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:09.485 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:09.485 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:09.485 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:09.485 [2024-11-05 16:35:16.488906] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:09.485 [2024-11-05 16:35:16.488955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948665 ] 00:11:09.745 [2024-11-05 16:35:16.561279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.745 [2024-11-05 16:35:16.600718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.745 [2024-11-05 16:35:16.600743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.746 [2024-11-05 16:35:16.600753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2948840 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2948840 /var/tmp/spdk2.sock 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 2948840 /var/tmp/spdk2.sock 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2948840 /var/tmp/spdk2.sock 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2948840 ']' 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:09.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:09.746 16:35:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.006 [2024-11-05 16:35:16.852651] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:11:10.006 [2024-11-05 16:35:16.852705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948840 ] 00:11:10.006 [2024-11-05 16:35:16.939699] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2948665 has claimed it. 00:11:10.006 [2024-11-05 16:35:16.939730] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:10.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2948840) - No such process 00:11:10.577 ERROR: process (pid: 2948840) is no longer running 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2948665 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2948665 ']' 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2948665 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2948665 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2948665' 00:11:10.577 killing process with pid 2948665 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2948665 00:11:10.577 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2948665 00:11:10.837 00:11:10.837 real 0m1.308s 00:11:10.837 user 0m3.688s 00:11:10.837 sys 0m0.347s 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.837 
************************************ 00:11:10.837 END TEST locking_overlapped_coremask 00:11:10.837 ************************************ 00:11:10.837 16:35:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:10.837 16:35:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:10.837 16:35:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.837 16:35:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:10.837 ************************************ 00:11:10.837 START TEST locking_overlapped_coremask_via_rpc 00:11:10.837 ************************************ 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2949023 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2949023 /var/tmp/spdk.sock 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2949023 ']' 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:10.837 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:10.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.838 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:10.838 16:35:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.838 [2024-11-05 16:35:17.875338] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:10.838 [2024-11-05 16:35:17.875387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949023 ] 00:11:11.098 [2024-11-05 16:35:17.946427] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:11.098 [2024-11-05 16:35:17.946458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.098 [2024-11-05 16:35:17.983451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.098 [2024-11-05 16:35:17.983565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.098 [2024-11-05 16:35:17.983568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2949307 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2949307 /var/tmp/spdk2.sock 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2949307 ']' 00:11:11.669 16:35:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:11.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:11.669 16:35:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.931 [2024-11-05 16:35:18.736265] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:11.931 [2024-11-05 16:35:18.736322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949307 ] 00:11:11.931 [2024-11-05 16:35:18.824330] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:11.931 [2024-11-05 16:35:18.824357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.931 [2024-11-05 16:35:18.887539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.931 [2024-11-05 16:35:18.887695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.931 [2024-11-05 16:35:18.887697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.502 16:35:19 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.502 [2024-11-05 16:35:19.535811] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2949023 has claimed it. 00:11:12.502 request: 00:11:12.502 { 00:11:12.502 "method": "framework_enable_cpumask_locks", 00:11:12.502 "req_id": 1 00:11:12.502 } 00:11:12.502 Got JSON-RPC error response 00:11:12.502 response: 00:11:12.502 { 00:11:12.502 "code": -32603, 00:11:12.502 "message": "Failed to claim CPU core: 2" 00:11:12.502 } 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2949023 /var/tmp/spdk.sock 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 
-- # '[' -z 2949023 ']' 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:12.502 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2949307 /var/tmp/spdk2.sock 00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2949307 ']' 00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:12.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:12.765 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.027 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:13.027 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:13.027 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:13.027 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:13.027 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:13.027 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:13.027 00:11:13.027 real 0m2.085s 00:11:13.027 user 0m0.860s 00:11:13.027 sys 0m0.144s 00:11:13.027 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.027 16:35:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.027 ************************************ 00:11:13.027 END TEST locking_overlapped_coremask_via_rpc 00:11:13.027 ************************************ 00:11:13.027 16:35:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:13.027 16:35:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2949023 ]] 00:11:13.027 16:35:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2949023 00:11:13.027 16:35:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2949023 ']' 00:11:13.027 16:35:19 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2949023 00:11:13.027 16:35:19 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:11:13.027 16:35:19 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:13.027 16:35:19 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2949023 00:11:13.027 16:35:20 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:13.027 16:35:20 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:13.027 16:35:20 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2949023' 00:11:13.027 killing process with pid 2949023 00:11:13.027 16:35:20 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2949023 00:11:13.027 16:35:20 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2949023 00:11:13.287 16:35:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2949307 ]] 00:11:13.287 16:35:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2949307 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2949307 ']' 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2949307 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2949307 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2949307' 00:11:13.287 killing process with pid 2949307 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2949307 00:11:13.287 16:35:20 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2949307 00:11:13.548 16:35:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:13.549 16:35:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:13.549 16:35:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2949023 ]] 00:11:13.549 16:35:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2949023 00:11:13.549 16:35:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2949023 ']' 00:11:13.549 16:35:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2949023 00:11:13.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2949023) - No such process 00:11:13.549 16:35:20 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2949023 is not found' 00:11:13.549 Process with pid 2949023 is not found 00:11:13.549 16:35:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2949307 ]] 00:11:13.549 16:35:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2949307 00:11:13.549 16:35:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2949307 ']' 00:11:13.549 16:35:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2949307 00:11:13.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2949307) - No such process 00:11:13.549 16:35:20 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2949307 is not found' 00:11:13.549 Process with pid 2949307 is not found 00:11:13.549 16:35:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:13.549 00:11:13.549 real 0m15.098s 00:11:13.549 user 0m26.172s 00:11:13.549 sys 0m4.845s 00:11:13.549 16:35:20 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.549 
16:35:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:13.549 ************************************ 00:11:13.549 END TEST cpu_locks 00:11:13.549 ************************************ 00:11:13.549 00:11:13.549 real 0m39.957s 00:11:13.549 user 1m16.866s 00:11:13.549 sys 0m8.067s 00:11:13.549 16:35:20 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.549 16:35:20 event -- common/autotest_common.sh@10 -- # set +x 00:11:13.549 ************************************ 00:11:13.549 END TEST event 00:11:13.549 ************************************ 00:11:13.549 16:35:20 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:13.549 16:35:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:13.549 16:35:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.549 16:35:20 -- common/autotest_common.sh@10 -- # set +x 00:11:13.549 ************************************ 00:11:13.549 START TEST thread 00:11:13.549 ************************************ 00:11:13.549 16:35:20 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:13.810 * Looking for test storage... 
00:11:13.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:13.810 16:35:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.810 16:35:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.810 16:35:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.810 16:35:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.810 16:35:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.810 16:35:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.810 16:35:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.810 16:35:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.810 16:35:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.810 16:35:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.810 16:35:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.810 16:35:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:13.810 16:35:20 thread -- scripts/common.sh@345 -- # : 1 00:11:13.810 16:35:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.810 16:35:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.810 16:35:20 thread -- scripts/common.sh@365 -- # decimal 1 00:11:13.810 16:35:20 thread -- scripts/common.sh@353 -- # local d=1 00:11:13.810 16:35:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.810 16:35:20 thread -- scripts/common.sh@355 -- # echo 1 00:11:13.810 16:35:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.810 16:35:20 thread -- scripts/common.sh@366 -- # decimal 2 00:11:13.810 16:35:20 thread -- scripts/common.sh@353 -- # local d=2 00:11:13.810 16:35:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.810 16:35:20 thread -- scripts/common.sh@355 -- # echo 2 00:11:13.810 16:35:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.810 16:35:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.810 16:35:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.810 16:35:20 thread -- scripts/common.sh@368 -- # return 0 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:13.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.810 --rc genhtml_branch_coverage=1 00:11:13.810 --rc genhtml_function_coverage=1 00:11:13.810 --rc genhtml_legend=1 00:11:13.810 --rc geninfo_all_blocks=1 00:11:13.810 --rc geninfo_unexecuted_blocks=1 00:11:13.810 00:11:13.810 ' 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:13.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.810 --rc genhtml_branch_coverage=1 00:11:13.810 --rc genhtml_function_coverage=1 00:11:13.810 --rc genhtml_legend=1 00:11:13.810 --rc geninfo_all_blocks=1 00:11:13.810 --rc geninfo_unexecuted_blocks=1 00:11:13.810 00:11:13.810 ' 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:13.810 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.810 --rc genhtml_branch_coverage=1 00:11:13.810 --rc genhtml_function_coverage=1 00:11:13.810 --rc genhtml_legend=1 00:11:13.810 --rc geninfo_all_blocks=1 00:11:13.810 --rc geninfo_unexecuted_blocks=1 00:11:13.810 00:11:13.810 ' 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:13.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.810 --rc genhtml_branch_coverage=1 00:11:13.810 --rc genhtml_function_coverage=1 00:11:13.810 --rc genhtml_legend=1 00:11:13.810 --rc geninfo_all_blocks=1 00:11:13.810 --rc geninfo_unexecuted_blocks=1 00:11:13.810 00:11:13.810 ' 00:11:13.810 16:35:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.810 16:35:20 thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.810 ************************************ 00:11:13.810 START TEST thread_poller_perf 00:11:13.810 ************************************ 00:11:13.810 16:35:20 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:13.810 [2024-11-05 16:35:20.866786] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:11:13.810 [2024-11-05 16:35:20.866922] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949804 ] 00:11:14.071 [2024-11-05 16:35:20.941563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.071 [2024-11-05 16:35:20.977374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.071 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:15.012 [2024-11-05T15:35:22.075Z] ====================================== 00:11:15.012 [2024-11-05T15:35:22.075Z] busy:2412714300 (cyc) 00:11:15.012 [2024-11-05T15:35:22.075Z] total_run_count: 287000 00:11:15.012 [2024-11-05T15:35:22.075Z] tsc_hz: 2400000000 (cyc) 00:11:15.012 [2024-11-05T15:35:22.075Z] ====================================== 00:11:15.012 [2024-11-05T15:35:22.075Z] poller_cost: 8406 (cyc), 3502 (nsec) 00:11:15.012 00:11:15.012 real 0m1.175s 00:11:15.012 user 0m1.106s 00:11:15.012 sys 0m0.064s 00:11:15.012 16:35:22 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.012 16:35:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:15.012 ************************************ 00:11:15.012 END TEST thread_poller_perf 00:11:15.012 ************************************ 00:11:15.012 16:35:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:15.012 16:35:22 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:15.012 16:35:22 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.012 16:35:22 thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.274 ************************************ 00:11:15.274 START TEST thread_poller_perf 00:11:15.274 
************************************ 00:11:15.274 16:35:22 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:15.274 [2024-11-05 16:35:22.116397] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:15.274 [2024-11-05 16:35:22.116482] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950133 ] 00:11:15.274 [2024-11-05 16:35:22.192138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.274 [2024-11-05 16:35:22.228336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.274 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:16.216 [2024-11-05T15:35:23.279Z] ====================================== 00:11:16.216 [2024-11-05T15:35:23.279Z] busy:2402022994 (cyc) 00:11:16.216 [2024-11-05T15:35:23.279Z] total_run_count: 3817000 00:11:16.216 [2024-11-05T15:35:23.279Z] tsc_hz: 2400000000 (cyc) 00:11:16.216 [2024-11-05T15:35:23.279Z] ====================================== 00:11:16.216 [2024-11-05T15:35:23.279Z] poller_cost: 629 (cyc), 262 (nsec) 00:11:16.216 00:11:16.216 real 0m1.165s 00:11:16.216 user 0m1.096s 00:11:16.216 sys 0m0.066s 00:11:16.216 16:35:23 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:16.216 16:35:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:16.216 ************************************ 00:11:16.216 END TEST thread_poller_perf 00:11:16.216 ************************************ 00:11:16.478 16:35:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:16.478 00:11:16.478 real 0m2.698s 00:11:16.478 user 0m2.377s 00:11:16.478 sys 0m0.335s 00:11:16.478 16:35:23 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:11:16.478 16:35:23 thread -- common/autotest_common.sh@10 -- # set +x 00:11:16.478 ************************************ 00:11:16.478 END TEST thread 00:11:16.478 ************************************ 00:11:16.478 16:35:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:16.478 16:35:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:11:16.478 16:35:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:16.478 16:35:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:16.478 16:35:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.478 ************************************ 00:11:16.478 START TEST app_cmdline 00:11:16.478 ************************************ 00:11:16.478 16:35:23 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:11:16.478 * Looking for test storage... 00:11:16.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:16.478 16:35:23 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:16.478 16:35:23 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:11:16.478 16:35:23 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.740 16:35:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:16.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.740 --rc genhtml_branch_coverage=1 
00:11:16.740 --rc genhtml_function_coverage=1 00:11:16.740 --rc genhtml_legend=1 00:11:16.740 --rc geninfo_all_blocks=1 00:11:16.740 --rc geninfo_unexecuted_blocks=1 00:11:16.740 00:11:16.740 ' 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:16.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.740 --rc genhtml_branch_coverage=1 00:11:16.740 --rc genhtml_function_coverage=1 00:11:16.740 --rc genhtml_legend=1 00:11:16.740 --rc geninfo_all_blocks=1 00:11:16.740 --rc geninfo_unexecuted_blocks=1 00:11:16.740 00:11:16.740 ' 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:16.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.740 --rc genhtml_branch_coverage=1 00:11:16.740 --rc genhtml_function_coverage=1 00:11:16.740 --rc genhtml_legend=1 00:11:16.740 --rc geninfo_all_blocks=1 00:11:16.740 --rc geninfo_unexecuted_blocks=1 00:11:16.740 00:11:16.740 ' 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:16.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.740 --rc genhtml_branch_coverage=1 00:11:16.740 --rc genhtml_function_coverage=1 00:11:16.740 --rc genhtml_legend=1 00:11:16.740 --rc geninfo_all_blocks=1 00:11:16.740 --rc geninfo_unexecuted_blocks=1 00:11:16.740 00:11:16.740 ' 00:11:16.740 16:35:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:16.740 16:35:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2950415 00:11:16.740 16:35:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2950415 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2950415 ']' 00:11:16.740 16:35:23 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:16.740 16:35:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:16.740 [2024-11-05 16:35:23.641776] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:16.741 [2024-11-05 16:35:23.641848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950415 ] 00:11:16.741 [2024-11-05 16:35:23.718457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.741 [2024-11-05 16:35:23.760971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:17.685 { 00:11:17.685 "version": "SPDK v25.01-pre git sha1 dbbc706e0", 00:11:17.685 "fields": { 00:11:17.685 "major": 25, 00:11:17.685 "minor": 1, 00:11:17.685 "patch": 0, 00:11:17.685 "suffix": "-pre", 00:11:17.685 "commit": "dbbc706e0" 00:11:17.685 } 00:11:17.685 } 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:17.685 16:35:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:17.685 16:35:24 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:17.947 request: 00:11:17.947 { 00:11:17.947 "method": "env_dpdk_get_mem_stats", 00:11:17.947 "req_id": 1 00:11:17.947 } 00:11:17.947 Got JSON-RPC error response 00:11:17.947 response: 00:11:17.947 { 00:11:17.947 "code": -32601, 00:11:17.947 "message": "Method not found" 00:11:17.947 } 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:17.947 16:35:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2950415 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2950415 ']' 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2950415 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2950415 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2950415' 00:11:17.947 killing process with pid 2950415 00:11:17.947 
16:35:24 app_cmdline -- common/autotest_common.sh@971 -- # kill 2950415 00:11:17.947 16:35:24 app_cmdline -- common/autotest_common.sh@976 -- # wait 2950415 00:11:18.209 00:11:18.209 real 0m1.751s 00:11:18.209 user 0m2.077s 00:11:18.209 sys 0m0.491s 00:11:18.209 16:35:25 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.209 16:35:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:18.209 ************************************ 00:11:18.209 END TEST app_cmdline 00:11:18.209 ************************************ 00:11:18.209 16:35:25 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:11:18.209 16:35:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:18.209 16:35:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.209 16:35:25 -- common/autotest_common.sh@10 -- # set +x 00:11:18.209 ************************************ 00:11:18.209 START TEST version 00:11:18.209 ************************************ 00:11:18.209 16:35:25 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:11:18.470 * Looking for test storage... 
00:11:18.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1691 -- # lcov --version 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:18.470 16:35:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.470 16:35:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.470 16:35:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.470 16:35:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.470 16:35:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.470 16:35:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.470 16:35:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.470 16:35:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.470 16:35:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.470 16:35:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.470 16:35:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.470 16:35:25 version -- scripts/common.sh@344 -- # case "$op" in 00:11:18.470 16:35:25 version -- scripts/common.sh@345 -- # : 1 00:11:18.470 16:35:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.470 16:35:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.470 16:35:25 version -- scripts/common.sh@365 -- # decimal 1 00:11:18.470 16:35:25 version -- scripts/common.sh@353 -- # local d=1 00:11:18.470 16:35:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.470 16:35:25 version -- scripts/common.sh@355 -- # echo 1 00:11:18.470 16:35:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.470 16:35:25 version -- scripts/common.sh@366 -- # decimal 2 00:11:18.470 16:35:25 version -- scripts/common.sh@353 -- # local d=2 00:11:18.470 16:35:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.470 16:35:25 version -- scripts/common.sh@355 -- # echo 2 00:11:18.470 16:35:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.470 16:35:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.470 16:35:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.470 16:35:25 version -- scripts/common.sh@368 -- # return 0 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:18.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.470 --rc genhtml_branch_coverage=1 00:11:18.470 --rc genhtml_function_coverage=1 00:11:18.470 --rc genhtml_legend=1 00:11:18.470 --rc geninfo_all_blocks=1 00:11:18.470 --rc geninfo_unexecuted_blocks=1 00:11:18.470 00:11:18.470 ' 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:18.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.470 --rc genhtml_branch_coverage=1 00:11:18.470 --rc genhtml_function_coverage=1 00:11:18.470 --rc genhtml_legend=1 00:11:18.470 --rc geninfo_all_blocks=1 00:11:18.470 --rc geninfo_unexecuted_blocks=1 00:11:18.470 00:11:18.470 ' 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:18.470 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.470 --rc genhtml_branch_coverage=1 00:11:18.470 --rc genhtml_function_coverage=1 00:11:18.470 --rc genhtml_legend=1 00:11:18.470 --rc geninfo_all_blocks=1 00:11:18.470 --rc geninfo_unexecuted_blocks=1 00:11:18.470 00:11:18.470 ' 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:18.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.470 --rc genhtml_branch_coverage=1 00:11:18.470 --rc genhtml_function_coverage=1 00:11:18.470 --rc genhtml_legend=1 00:11:18.470 --rc geninfo_all_blocks=1 00:11:18.470 --rc geninfo_unexecuted_blocks=1 00:11:18.470 00:11:18.470 ' 00:11:18.470 16:35:25 version -- app/version.sh@17 -- # get_header_version major 00:11:18.470 16:35:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:18.470 16:35:25 version -- app/version.sh@14 -- # cut -f2 00:11:18.470 16:35:25 version -- app/version.sh@14 -- # tr -d '"' 00:11:18.470 16:35:25 version -- app/version.sh@17 -- # major=25 00:11:18.470 16:35:25 version -- app/version.sh@18 -- # get_header_version minor 00:11:18.470 16:35:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:18.470 16:35:25 version -- app/version.sh@14 -- # cut -f2 00:11:18.470 16:35:25 version -- app/version.sh@14 -- # tr -d '"' 00:11:18.470 16:35:25 version -- app/version.sh@18 -- # minor=1 00:11:18.470 16:35:25 version -- app/version.sh@19 -- # get_header_version patch 00:11:18.470 16:35:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:18.470 16:35:25 version -- app/version.sh@14 -- # cut -f2 00:11:18.470 16:35:25 version -- app/version.sh@14 -- # tr -d '"' 00:11:18.470 
16:35:25 version -- app/version.sh@19 -- # patch=0 00:11:18.470 16:35:25 version -- app/version.sh@20 -- # get_header_version suffix 00:11:18.470 16:35:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:18.470 16:35:25 version -- app/version.sh@14 -- # cut -f2 00:11:18.470 16:35:25 version -- app/version.sh@14 -- # tr -d '"' 00:11:18.470 16:35:25 version -- app/version.sh@20 -- # suffix=-pre 00:11:18.470 16:35:25 version -- app/version.sh@22 -- # version=25.1 00:11:18.470 16:35:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:18.470 16:35:25 version -- app/version.sh@28 -- # version=25.1rc0 00:11:18.470 16:35:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:18.470 16:35:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:18.470 16:35:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:18.470 16:35:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:18.470 00:11:18.470 real 0m0.283s 00:11:18.470 user 0m0.171s 00:11:18.470 sys 0m0.163s 00:11:18.470 16:35:25 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.470 16:35:25 version -- common/autotest_common.sh@10 -- # set +x 00:11:18.470 ************************************ 00:11:18.470 END TEST version 00:11:18.470 ************************************ 00:11:18.470 16:35:25 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:18.470 16:35:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:18.470 16:35:25 -- spdk/autotest.sh@194 -- # uname -s 00:11:18.732 16:35:25 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:11:18.732 16:35:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:18.732 16:35:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:18.732 16:35:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:18.732 16:35:25 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:11:18.732 16:35:25 -- spdk/autotest.sh@256 -- # timing_exit lib 00:11:18.732 16:35:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:18.732 16:35:25 -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 16:35:25 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:11:18.732 16:35:25 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:11:18.732 16:35:25 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:11:18.732 16:35:25 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:11:18.732 16:35:25 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:11:18.732 16:35:25 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:11:18.732 16:35:25 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:18.732 16:35:25 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:18.732 16:35:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.732 16:35:25 -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 ************************************ 00:11:18.732 START TEST nvmf_tcp 00:11:18.732 ************************************ 00:11:18.732 16:35:25 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:18.732 * Looking for test storage... 
00:11:18.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:18.732 16:35:25 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:18.732 16:35:25 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:18.732 16:35:25 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.993 16:35:25 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.993 --rc genhtml_branch_coverage=1 00:11:18.993 --rc genhtml_function_coverage=1 00:11:18.993 --rc genhtml_legend=1 00:11:18.993 --rc geninfo_all_blocks=1 00:11:18.993 --rc geninfo_unexecuted_blocks=1 00:11:18.993 00:11:18.993 ' 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.993 --rc genhtml_branch_coverage=1 00:11:18.993 --rc genhtml_function_coverage=1 00:11:18.993 --rc genhtml_legend=1 00:11:18.993 --rc geninfo_all_blocks=1 00:11:18.993 --rc geninfo_unexecuted_blocks=1 00:11:18.993 00:11:18.993 ' 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:11:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.993 --rc genhtml_branch_coverage=1 00:11:18.993 --rc genhtml_function_coverage=1 00:11:18.993 --rc genhtml_legend=1 00:11:18.993 --rc geninfo_all_blocks=1 00:11:18.993 --rc geninfo_unexecuted_blocks=1 00:11:18.993 00:11:18.993 ' 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.993 --rc genhtml_branch_coverage=1 00:11:18.993 --rc genhtml_function_coverage=1 00:11:18.993 --rc genhtml_legend=1 00:11:18.993 --rc geninfo_all_blocks=1 00:11:18.993 --rc geninfo_unexecuted_blocks=1 00:11:18.993 00:11:18.993 ' 00:11:18.993 16:35:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:18.993 16:35:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:11:18.993 16:35:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.993 16:35:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:18.993 ************************************ 00:11:18.993 START TEST nvmf_target_core 00:11:18.993 ************************************ 00:11:18.993 16:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:18.993 * Looking for test storage... 
00:11:18.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:18.993 16:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:18.993 16:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:11:18.993 16:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.993 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:19.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.256 --rc genhtml_branch_coverage=1 00:11:19.256 --rc genhtml_function_coverage=1 00:11:19.256 --rc genhtml_legend=1 00:11:19.256 --rc geninfo_all_blocks=1 00:11:19.256 --rc geninfo_unexecuted_blocks=1 00:11:19.256 00:11:19.256 ' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:19.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.256 --rc genhtml_branch_coverage=1 
00:11:19.256 --rc genhtml_function_coverage=1 00:11:19.256 --rc genhtml_legend=1 00:11:19.256 --rc geninfo_all_blocks=1 00:11:19.256 --rc geninfo_unexecuted_blocks=1 00:11:19.256 00:11:19.256 ' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:19.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.256 --rc genhtml_branch_coverage=1 00:11:19.256 --rc genhtml_function_coverage=1 00:11:19.256 --rc genhtml_legend=1 00:11:19.256 --rc geninfo_all_blocks=1 00:11:19.256 --rc geninfo_unexecuted_blocks=1 00:11:19.256 00:11:19.256 ' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:19.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.256 --rc genhtml_branch_coverage=1 00:11:19.256 --rc genhtml_function_coverage=1 00:11:19.256 --rc genhtml_legend=1 00:11:19.256 --rc geninfo_all_blocks=1 00:11:19.256 --rc geninfo_unexecuted_blocks=1 00:11:19.256 00:11:19.256 ' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@5 -- # export PATH 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:19.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 
']' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.256 ************************************ 00:11:19.256 START TEST nvmf_abort 00:11:19.256 ************************************ 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:19.256 * Looking for test storage... 
00:11:19.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.256 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.257 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.257 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.257 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.257 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.257 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.257 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.257 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.257 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.518 
16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:19.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.518 --rc genhtml_branch_coverage=1 00:11:19.518 --rc genhtml_function_coverage=1 00:11:19.518 --rc genhtml_legend=1 00:11:19.518 --rc geninfo_all_blocks=1 00:11:19.518 --rc 
geninfo_unexecuted_blocks=1 00:11:19.518 00:11:19.518 ' 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:19.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.518 --rc genhtml_branch_coverage=1 00:11:19.518 --rc genhtml_function_coverage=1 00:11:19.518 --rc genhtml_legend=1 00:11:19.518 --rc geninfo_all_blocks=1 00:11:19.518 --rc geninfo_unexecuted_blocks=1 00:11:19.518 00:11:19.518 ' 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:19.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.518 --rc genhtml_branch_coverage=1 00:11:19.518 --rc genhtml_function_coverage=1 00:11:19.518 --rc genhtml_legend=1 00:11:19.518 --rc geninfo_all_blocks=1 00:11:19.518 --rc geninfo_unexecuted_blocks=1 00:11:19.518 00:11:19.518 ' 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:19.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.518 --rc genhtml_branch_coverage=1 00:11:19.518 --rc genhtml_function_coverage=1 00:11:19.518 --rc genhtml_legend=1 00:11:19.518 --rc geninfo_all_blocks=1 00:11:19.518 --rc geninfo_unexecuted_blocks=1 00:11:19.518 00:11:19.518 ' 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.518 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
paths/export.sh@5 -- # export PATH 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:19.519 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:19.519 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:11:19.519 16:35:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:27.687 16:35:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:27.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:27.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:27.687 16:35:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:27.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:27.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:27.687 
16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:27.687 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@247 -- # create_target_ns 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set lo up' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:27.688 16:35:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:27.688 16:35:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:27.688 10.0.0.1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:27.688 10.0.0.2 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 
00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:27.688 
16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:27.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.689 ms 00:11:27.688 00:11:27.688 --- 10.0.0.1 ping statistics --- 00:11:27.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.688 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:27.688 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:27.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:11:27.689 00:11:27.689 --- 10.0.0.2 ping statistics --- 00:11:27.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.689 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:27.689 
16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:11:27.689 ' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=2954901 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 2954901 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # 
ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2954901 ']' 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:27.689 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.689 [2024-11-05 16:35:33.974927] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:27.690 [2024-11-05 16:35:33.974997] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.690 [2024-11-05 16:35:34.075532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.690 [2024-11-05 16:35:34.129494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.690 [2024-11-05 16:35:34.129549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:27.690 [2024-11-05 16:35:34.129558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.690 [2024-11-05 16:35:34.129565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.690 [2024-11-05 16:35:34.129571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.690 [2024-11-05 16:35:34.131397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.690 [2024-11-05 16:35:34.131569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.690 [2024-11-05 16:35:34.131570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 [2024-11-05 16:35:34.834768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 Malloc0 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 Delay0 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.950 16:35:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 [2024-11-05 16:35:34.916780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.950 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:27.950 [2024-11-05 16:35:35.004808] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:30.492 Initializing NVMe Controllers 00:11:30.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:30.492 controller IO queue size 128 less than required 00:11:30.492 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:30.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:30.492 Initialization complete. Launching workers. 
00:11:30.492 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 27831 00:11:30.492 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27893, failed to submit 62 00:11:30.492 success 27835, unsuccessful 58, failed 0 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:30.492 rmmod nvme_tcp 00:11:30.492 rmmod nvme_fabrics 00:11:30.492 rmmod nvme_keyring 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:11:30.492 16:35:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 2954901 ']' 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 2954901 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2954901 ']' 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2954901 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2954901 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2954901' 00:11:30.492 killing process with pid 2954901 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2954901 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2954901 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:30.492 16:35:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:11:33.038 00:11:33.038 real 0m13.390s 00:11:33.038 user 0m14.174s 00:11:33.038 sys 0m6.440s 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.038 ************************************ 00:11:33.038 END TEST nvmf_abort 00:11:33.038 ************************************ 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.038 ************************************ 00:11:33.038 START TEST 
nvmf_ns_hotplug_stress 00:11:33.038 ************************************ 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:33.038 * Looking for test storage... 00:11:33.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.038 16:35:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.038 16:35:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.038 --rc genhtml_branch_coverage=1 00:11:33.038 --rc genhtml_function_coverage=1 00:11:33.038 --rc genhtml_legend=1 00:11:33.038 --rc geninfo_all_blocks=1 00:11:33.038 --rc geninfo_unexecuted_blocks=1 00:11:33.038 00:11:33.038 ' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.038 --rc genhtml_branch_coverage=1 00:11:33.038 --rc genhtml_function_coverage=1 00:11:33.038 --rc genhtml_legend=1 00:11:33.038 --rc geninfo_all_blocks=1 00:11:33.038 --rc geninfo_unexecuted_blocks=1 00:11:33.038 00:11:33.038 ' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.038 --rc genhtml_branch_coverage=1 00:11:33.038 --rc genhtml_function_coverage=1 00:11:33.038 --rc genhtml_legend=1 00:11:33.038 --rc geninfo_all_blocks=1 00:11:33.038 --rc geninfo_unexecuted_blocks=1 00:11:33.038 00:11:33.038 ' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.038 --rc genhtml_branch_coverage=1 00:11:33.038 --rc genhtml_function_coverage=1 00:11:33.038 
--rc genhtml_legend=1 00:11:33.038 --rc geninfo_all_blocks=1 00:11:33.038 --rc geninfo_unexecuted_blocks=1 00:11:33.038 00:11:33.038 ' 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.038 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:33.039 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:11:33.039 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:11:39.753 16:35:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:39.753 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:39.753 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:39.753 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:39.753 16:35:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.753 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:39.754 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # 
create_target_ns 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:39.754 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:40.016 10.0.0.1 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:40.016 10.0.0.2 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip 
link set cvl_0_0 up' 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:40.016 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:40.016 16:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:40.016 
16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:40.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.593 ms 00:11:40.016 00:11:40.016 --- 10.0.0.1 ping statistics --- 00:11:40.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.016 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:40.016 16:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:40.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:11:40.016 00:11:40.016 --- 10.0.0.2 ping statistics --- 00:11:40.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.016 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:40.016 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:40.017 16:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:40.017 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:40.279 16:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:40.279 16:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:11:40.279 ' 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:40.279 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=2959828 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 2959828 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2959828 ']' 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:40.280 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.280 [2024-11-05 16:35:47.235865] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:11:40.280 [2024-11-05 16:35:47.235931] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.280 [2024-11-05 16:35:47.336783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:40.541 [2024-11-05 16:35:47.387289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.541 [2024-11-05 16:35:47.387344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.541 [2024-11-05 16:35:47.387353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.541 [2024-11-05 16:35:47.387360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.541 [2024-11-05 16:35:47.387366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:40.541 [2024-11-05 16:35:47.389169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.541 [2024-11-05 16:35:47.389334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.541 [2024-11-05 16:35:47.389335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.114 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:41.114 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:11:41.114 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:41.114 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.114 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.114 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.114 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:41.114 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:41.374 [2024-11-05 16:35:48.225184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.374 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:41.636 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.636 [2024-11-05 16:35:48.602721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.636 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.897 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:42.158 Malloc0 00:11:42.158 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:42.158 Delay0 00:11:42.158 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.420 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:42.681 NULL1 00:11:42.681 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:42.941 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2960517 00:11:42.941 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:42.941 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:42.941 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.885 Read completed with error (sct=0, sc=11) 00:11:43.885 16:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.146 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:44.146 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:44.408 true 00:11:44.408 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:44.408 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.351 16:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.351 16:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:45.351 16:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:45.612 true 00:11:45.612 16:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:45.612 16:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.873 16:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.873 16:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:45.873 16:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:46.134 true 00:11:46.134 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:46.134 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.338 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.338 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:47.338 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:47.598 true 00:11:47.598 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:47.598 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.541 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.541 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:48.541 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:48.803 true 00:11:48.803 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:48.803 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.064 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.064 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:49.064 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:49.326 true 00:11:49.326 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:49.326 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.710 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.710 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:11:50.710 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:50.710 true 00:11:50.710 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:50.710 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.651 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.911 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:51.911 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:51.911 true 00:11:51.911 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:51.911 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.172 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.432 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:52.432 16:35:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:52.432 true 00:11:52.693 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:52.693 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.693 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.954 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:52.954 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:53.213 true 00:11:53.213 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:53.213 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.213 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.472 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:53.472 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:53.732 true 00:11:53.732 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:53.732 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.732 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.992 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:53.992 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:54.251 true 00:11:54.251 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:54.251 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.511 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.511 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:54.511 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:54.770 true 00:11:54.770 16:36:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:54.770 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.030 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.030 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:55.030 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:55.290 true 00:11:55.290 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:55.290 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:56.234 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:56.234 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:56.234 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:56.493 true 00:11:56.493 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:56.493 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.752 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.752 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:56.752 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:57.012 true 00:11:57.012 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:57.012 16:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.272 16:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.272 16:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:57.272 16:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:57.531 true 00:11:57.531 16:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:57.531 16:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.791 16:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.791 16:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:57.791 16:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:58.051 true 00:11:58.051 16:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:58.051 16:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.429 16:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.429 16:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:59.429 16:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:59.688 true 00:11:59.688 16:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:11:59.688 16:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.627 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.627 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:00.627 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:00.627 true 00:12:00.888 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:00.888 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.888 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.148 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:01.148 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:01.407 true 00:12:01.407 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:01.407 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.347 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.607 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:12:02.607 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:02.607 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:02.867 true 00:12:02.867 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:02.867 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.832 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.832 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:03.832 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:03.832 true 00:12:04.092 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:04.092 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.092 16:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.352 16:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:04.352 16:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:04.611 true 00:12:04.611 16:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:04.611 16:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.995 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.995 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:05.995 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:05.995 true 00:12:05.995 16:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:05.995 16:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.936 16:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.196 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:07.196 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:07.196 true 00:12:07.196 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:07.197 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.456 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.716 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:07.716 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:07.716 true 00:12:07.716 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:07.716 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.976 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.236 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:08.236 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:08.236 true 00:12:08.236 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:08.236 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.497 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.758 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:08.758 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:08.758 true 00:12:09.018 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:09.018 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:09.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.958 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.217 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:12:10.217 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:12:10.477 true 00:12:10.477 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:10.477 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.414 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.414 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:12:11.414 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1031 00:12:11.674 true 00:12:11.674 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:11.674 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.674 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.933 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:12:11.934 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:12:12.192 true 00:12:12.192 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:12.192 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.192 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.452 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:12:12.453 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:12:12.711 true 00:12:12.711 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2960517 00:12:12.711 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.711 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.971 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:12:12.971 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:12:13.232 Initializing NVMe Controllers 00:12:13.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:13.232 Controller IO queue size 128, less than required. 00:12:13.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:13.232 Controller IO queue size 128, less than required. 00:12:13.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:13.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:13.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:13.232 Initialization complete. Launching workers. 
00:12:13.232 ========================================================
00:12:13.232 Latency(us)
00:12:13.232 Device Information : IOPS MiB/s Average min max
00:12:13.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1875.83 0.92 36797.59 1775.40 1102975.99
00:12:13.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15832.57 7.73 8084.18 1658.34 498454.43
00:12:13.232 ========================================================
00:12:13.232 Total : 17708.40 8.65 11125.76 1658.34 1102975.99
00:12:13.232
00:12:13.232 true
00:12:13.232 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2960517
00:12:13.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2960517) - No such process
00:12:13.232 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2960517
00:12:13.232 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:13.232 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:13.492 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:13.492 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:13.492 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:13.492 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:13.492 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:13.753 null0 00:12:13.753 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:13.753 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:13.753 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:13.753 null1 00:12:14.013 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.013 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.013 16:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:14.013 null2 00:12:14.013 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.013 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.013 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:14.274 null3 00:12:14.274 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.274 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.274 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:14.274 null4 00:12:14.535 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.535 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.535 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:14.535 null5 00:12:14.535 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.535 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.535 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:14.796 null6 00:12:14.796 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:14.796 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:14.796 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:15.057 null7 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:15.057 16:36:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:15.057 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2967571 2967572 2967574 2967577 2967578 2967580 2967582 2967584 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.058 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:15.058 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.319 16:36:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.319 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.580 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:15.841 16:36:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:15.841 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:16.102 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:16.102 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:16.102 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.102 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.102 16:36:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.102 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.364 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.625 16:36:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:16.625 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.886 
16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:16.886 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.233 16:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.233 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:17.548 16:36:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:17.548 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:17.823 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.824 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.085 16:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.085 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.346 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.606 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:18.607 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:18.607 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.607 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.607 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.607 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.607 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.607 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:12:18.607 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:18.868 16:36:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:18.868 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:18.868 rmmod nvme_tcp 00:12:18.868 rmmod nvme_fabrics 00:12:19.129 rmmod nvme_keyring 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 2959828 ']' 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 2959828 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2959828 ']' 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2959828 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:12:19.129 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:19.130 16:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2959828 00:12:19.130 16:36:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2959828' 00:12:19.130 killing process with pid 2959828 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2959828 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2959828 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:19.130 16:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for 
dev in "${dev_map[@]}" 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:21.676 16:36:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:12:21.676 00:12:21.676 real 0m48.624s 00:12:21.676 user 3m12.684s 00:12:21.676 sys 0m15.653s 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.676 ************************************ 00:12:21.676 END TEST nvmf_ns_hotplug_stress 00:12:21.676 ************************************ 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:21.676 ************************************ 00:12:21.676 START TEST nvmf_delete_subsystem 00:12:21.676 ************************************ 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:21.676 
* Looking for test storage... 00:12:21.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 
00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:21.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.676 --rc genhtml_branch_coverage=1 00:12:21.676 --rc genhtml_function_coverage=1 00:12:21.676 --rc genhtml_legend=1 00:12:21.676 --rc geninfo_all_blocks=1 00:12:21.676 --rc geninfo_unexecuted_blocks=1 00:12:21.676 00:12:21.676 ' 00:12:21.676 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:21.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.676 --rc genhtml_branch_coverage=1 00:12:21.676 --rc genhtml_function_coverage=1 00:12:21.676 --rc genhtml_legend=1 00:12:21.676 --rc geninfo_all_blocks=1 00:12:21.676 --rc geninfo_unexecuted_blocks=1 00:12:21.676 00:12:21.676 ' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.677 --rc genhtml_branch_coverage=1 00:12:21.677 --rc genhtml_function_coverage=1 00:12:21.677 --rc genhtml_legend=1 00:12:21.677 --rc geninfo_all_blocks=1 00:12:21.677 --rc geninfo_unexecuted_blocks=1 00:12:21.677 00:12:21.677 ' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.677 --rc genhtml_branch_coverage=1 00:12:21.677 --rc genhtml_function_coverage=1 00:12:21.677 --rc genhtml_legend=1 00:12:21.677 --rc geninfo_all_blocks=1 00:12:21.677 --rc geninfo_unexecuted_blocks=1 00:12:21.677 00:12:21.677 ' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.677 16:36:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.677 16:36:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- 
# NVMF_TARGET_NS_CMD=() 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:21.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- 
# remove_target_ns 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:12:21.677 16:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:29.832 16:36:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # e810=() 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:29.832 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:29.832 16:36:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:29.832 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:29.832 16:36:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:29.832 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:29.832 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # 
is_hw=yes 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # create_target_ns 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:29.832 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # 
target=cvl_0_1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:29.833 
16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:29.833 10.0.0.1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:29.833 10.0.0.2 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:29.833 16:36:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
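The `val_to_ip` trace above shows the harness turning the IP-pool counter 167772161 into the dotted quad 10.0.0.1 before assigning it to the device. A standalone re-implementation of that conversion (the bit-shift octet extraction is an assumption on my part; the log only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`):

```shell
# val_to_ip: render a 32-bit integer as dotted-quad IPv4 notation.
# Octets are extracted by shifting and masking (assumed; setup.sh's exact
# arithmetic is not visible in this log, only the resulting printf).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) \
        $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) \
        $((val & 0xff))
}

val_to_ip 167772161    # 10.0.0.1
val_to_ip 167772162    # 10.0.0.2
```

This matches the log: the pool counter advances by 2 per device pair (`ip_pool += 2`), yielding 10.0.0.1 for the initiator side and 10.0.0.2 for the target side.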
00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:29.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
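The lookups above (`get_ip_address` → `cat /sys/class/net/<dev>/ifalias`) work because `set_ip` earlier mirrored each assigned address into the device's ifalias file. A minimal sketch of that read-back, with the sysfs root parameterized so it can run against a fake tree (an assumption for testability; the real helper reads `/sys/class/net` directly, optionally through `ip netns exec`):

```shell
# get_ip_address: read the IP the harness stashed in an interface's ifalias.
# $2 (sysfs root) is a testability hook, not part of the original helper.
get_ip_address() {
    local dev=$1 sysfs=${2:-/sys/class/net}
    cat "$sysfs/$dev/ifalias"
}
```

Storing the address in ifalias gives the scripts a single source of truth that works identically inside and outside the `nvmf_ns_spdk` namespace, without re-parsing `ip addr` output.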
00:12:29.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.679 ms 00:12:29.833 00:12:29.833 --- 10.0.0.1 ping statistics --- 00:12:29.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.833 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:29.833 16:36:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:29.833 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:29.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:12:29.834 00:12:29.834 --- 10.0.0.2 ping statistics --- 00:12:29.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.834 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:29.834 16:36:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local 
dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 
NVMF_TARGET_NS_CMD 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:12:29.834 ' 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=2972801 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 2972801 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2972801 ']' 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
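The `waitforlisten 2972801` call above blocks until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A hedged sketch of that retry loop (the socket path and `max_retries=100` come from the log; the polling interval is an assumption, and the real `autotest_common.sh` helper also verifies the pid is still alive):

```shell
# waitforlisten (sketch): poll until the target's UNIX-domain RPC socket
# exists, giving up after max_retries attempts.
waitforlisten() {
    local rpc_addr=${1:-/var/tmp/spdk.sock} max_retries=${2:-100}
    local i=0
    while [ ! -S "$rpc_addr" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1   # interval assumed; not visible in the log
    done
    return 0
}
```

Only once this returns does the script proceed to issue `rpc_cmd` calls, which is why the "Waiting for process to start up and listen on UNIX domain socket" message appears before any transport or subsystem configuration.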
00:12:29.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:29.834 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.834 [2024-11-05 16:36:36.006111] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:12:29.834 [2024-11-05 16:36:36.006180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.834 [2024-11-05 16:36:36.090922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:29.834 [2024-11-05 16:36:36.131654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.834 [2024-11-05 16:36:36.131691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.834 [2024-11-05 16:36:36.131700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.834 [2024-11-05 16:36:36.131707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.834 [2024-11-05 16:36:36.131713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:29.834 [2024-11-05 16:36:36.132883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.834 [2024-11-05 16:36:36.132902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.834 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:29.834 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:12:29.834 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.835 [2024-11-05 16:36:36.838655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.835 [2024-11-05 16:36:36.862872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.835 NULL1 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.835 Delay0 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.835 16:36:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.835 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.095 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.095 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2973125 00:12:30.095 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:30.095 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:30.095 [2024-11-05 16:36:36.969727] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
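Condensing the `rpc_cmd` trace above, the target configuration for this delete_subsystem test is six RPCs: create the TCP transport, create subsystem cnode1 with a 10-namespace cap, listen on 10.0.0.2:4420, build a null bdev, wrap it in a 1s-latency delay bdev, and attach that as the namespace. A dry-run sketch that only prints the equivalent commands (invoking SPDK's `rpc.py` this way is my assumption; the log drives them through the `rpc_cmd` wrapper):

```shell
# Dry run: echo each RPC instead of executing it, so the sequence can be
# inspected without a running nvmf_tgt. Values are taken from the log.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the point of the test: with every I/O held for ~1s, `spdk_nvme_perf` is guaranteed to have commands in flight when `nvmf_delete_subsystem` fires, producing the aborted-I/O flood that follows.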
00:12:32.008 16:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.008 16:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.008 16:36:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Write completed with error (sct=0, sc=8) 00:12:32.270 starting I/O failed: -6 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Write completed with error (sct=0, sc=8) 00:12:32.270 starting I/O failed: -6 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 starting I/O failed: -6 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Write completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 starting I/O failed: -6 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 starting I/O failed: -6 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Write completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Write completed with error (sct=0, sc=8) 00:12:32.270 starting I/O failed: -6 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Read completed with error (sct=0, sc=8) 00:12:32.270 Write completed with error 
(sct=0, sc=8) 00:12:32.270 Write completed with error (sct=0, sc=8) 00:12:32.270 starting I/O failed: -6
00:12:32.270 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided ...]
00:12:32.270 [2024-11-05 16:36:39.094657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a77960 is same with the state(6) to be set
00:12:32.270 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries elided ...]
00:12:32.271 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided ...]
00:12:32.271 [2024-11-05 16:36:39.098567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f63ec000c40 is same with the state(6) to be set
00:12:32.271 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries elided ...]
00:12:32.271
Read completed with error (sct=0, sc=8) 00:12:33.213 [2024-11-05 16:36:40.066636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a789a0 is same with the state(6) to be set
00:12:33.213 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries elided ...]
00:12:33.213 [2024-11-05 16:36:40.098175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a77780 is same with the state(6) to be set
00:12:33.213 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries elided ...]
00:12:33.213 [2024-11-05 16:36:40.098775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a77b40 is same with the state(6) to be set
00:12:33.213 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries elided ...]
00:12:33.213 [2024-11-05 16:36:40.100665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f63ec00d680 is same with the state(6) to be set
00:12:33.213 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries elided ...]
00:12:33.214 [2024-11-05 16:36:40.101018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f63ec00d020 is same with the state(6) to be set
00:12:33.214 Initializing NVMe Controllers
00:12:33.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:33.214 Controller IO queue size 128, less than required.
00:12:33.214 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
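[Editor's note] The completion errors above all carry the NVMe status pair (sct=0, sc=8). As an illustrative aid for reading these logs (not part of SPDK or of this test), the sketch below maps that pair to its meaning under the NVMe base specification: status code type 0 is Generic Command Status, and within it status code 0x08 is "Command Aborted due to SQ Deletion", which is expected here because the test deletes the subsystem while I/O is still in flight.

```shell
# Illustrative helper (not part of SPDK): decode the (sct, sc) pair
# printed in the "completed with error" lines above, per the NVMe
# base specification's Generic Command Status values.
decode_nvme_status() {
  local sct=$1 sc=$2
  case "$sct" in
    0) # Generic Command Status
      case "$sc" in
        0) echo "Successful Completion" ;;
        4) echo "Data Transfer Error" ;;
        8) echo "Command Aborted due to SQ Deletion" ;;
        *) echo "Generic Command Status, sc=$sc" ;;
      esac ;;
    1) echo "Command Specific Status, sc=$sc" ;;
    2) echo "Media and Data Integrity Error, sc=$sc" ;;
    *) echo "sct=$sct, sc=$sc" ;;
  esac
}

decode_nvme_status 0 8   # the pair seen throughout this test
```

Seeing this status en masse during delete_subsystem is therefore a sign the abort path works, not a device fault.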
00:12:33.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:33.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:33.214 Initialization complete. Launching workers.
00:12:33.214 ========================================================
00:12:33.214 Latency(us)
00:12:33.214 Device Information : IOPS MiB/s Average min max
00:12:33.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.23 0.09 879725.41 249.42 1006742.54
00:12:33.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.82 0.08 928101.97 281.45 1010079.52
00:12:33.214 ========================================================
00:12:33.214 Total : 333.05 0.16 902358.98 249.42 1010079.52
00:12:33.214
00:12:33.214 [2024-11-05 16:36:40.101440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a789a0 (9): Bad file descriptor
00:12:33.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:12:33.214 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.214 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:12:33.214 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2973125
00:12:33.214 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2973125
00:12:33.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2973125) - No such process
00:12:33.785 16:36:40
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2973125 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2973125 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2973125 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:33.785 [2024-11-05 16:36:40.634632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2973831 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2973831 00:12:33.785 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:33.785 [2024-11-05 16:36:40.710232] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:34.357 16:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:34.357 16:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2973831 00:12:34.357 16:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:34.617 16:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:34.617 16:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2973831 00:12:34.617 16:36:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:35.188 16:36:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:35.188 16:36:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2973831 00:12:35.188 16:36:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:35.759 16:36:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:35.759 16:36:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2973831 00:12:35.759 16:36:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:36.330 16:36:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:36.330 16:36:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2973831
00:12:36.330 16:36:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:36.902 16:36:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:36.902 16:36:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2973831
00:12:36.902 16:36:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:36.902 Initializing NVMe Controllers
00:12:36.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:36.902 Controller IO queue size 128, less than required.
00:12:36.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:36.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:36.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:36.902 Initialization complete. Launching workers.
00:12:36.902 ========================================================
00:12:36.902 Latency(us)
00:12:36.902 Device Information : IOPS MiB/s Average min max
00:12:36.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001955.48 1000178.19 1006322.28
00:12:36.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003117.04 1000219.42 1042317.30
00:12:36.902 ========================================================
00:12:36.902 Total : 256.00 0.12 1002536.26 1000178.19 1042317.30
00:12:36.902
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2973831
00:12:37.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2973831) - No such process
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2973831
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20}
00:12:37.164 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r
nvme-tcp 00:12:37.164 rmmod nvme_tcp 00:12:37.164 rmmod nvme_fabrics 00:12:37.424 rmmod nvme_keyring 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 2972801 ']' 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 2972801 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2972801 ']' 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2972801 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2972801 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2972801' 00:12:37.424 killing process with pid 2972801 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2972801 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 
2972801 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:37.424 16:36:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:12:39.972 00:12:39.972 real 0m18.211s 00:12:39.972 user 0m30.749s 00:12:39.972 sys 
0m6.654s 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.972 ************************************ 00:12:39.972 END TEST nvmf_delete_subsystem 00:12:39.972 ************************************ 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:39.972 ************************************ 00:12:39.972 START TEST nvmf_host_management 00:12:39.972 ************************************ 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:39.972 * Looking for test storage... 
00:12:39.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:39.972 16:36:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.972 16:36:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.972 --rc genhtml_branch_coverage=1 00:12:39.972 --rc genhtml_function_coverage=1 00:12:39.972 --rc genhtml_legend=1 00:12:39.972 --rc geninfo_all_blocks=1 00:12:39.972 --rc geninfo_unexecuted_blocks=1 00:12:39.972 00:12:39.972 ' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.972 --rc genhtml_branch_coverage=1 00:12:39.972 --rc genhtml_function_coverage=1 00:12:39.972 --rc genhtml_legend=1 00:12:39.972 --rc geninfo_all_blocks=1 00:12:39.972 --rc geninfo_unexecuted_blocks=1 00:12:39.972 00:12:39.972 ' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.972 --rc genhtml_branch_coverage=1 00:12:39.972 --rc genhtml_function_coverage=1 00:12:39.972 --rc genhtml_legend=1 00:12:39.972 --rc geninfo_all_blocks=1 00:12:39.972 --rc geninfo_unexecuted_blocks=1 00:12:39.972 00:12:39.972 ' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.972 --rc genhtml_branch_coverage=1 00:12:39.972 --rc genhtml_function_coverage=1 00:12:39.972 --rc genhtml_legend=1 00:12:39.972 --rc geninfo_all_blocks=1 00:12:39.972 --rc geninfo_unexecuted_blocks=1 00:12:39.972 00:12:39.972 ' 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.972 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.973 16:36:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:39.973 16:36:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:39.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:39.973 
16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:12:39.973 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 
-- # net_devs=() 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:48.116 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.116 16:36:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:48.116 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:48.116 16:36:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:48.116 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:48.116 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@247 -- # create_target_ns 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:48.116 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:12:48.117 16:36:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:48.117 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:48.117 10.0.0.1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:48.117 10.0.0.2 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:48.117 
16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:48.117 
16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:48.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.566 ms 00:12:48.117 00:12:48.117 --- 10.0.0.1 ping statistics --- 00:12:48.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.117 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:12:48.117 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:48.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:12:48.118 00:12:48.118 --- 10.0.0.2 ping statistics --- 00:12:48.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.118 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:48.118 
16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.118 
16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:12:48.118 ' 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=2978876 00:12:48.118 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 2978876 00:12:48.119 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:48.119 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2978876 ']' 00:12:48.119 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.119 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:48.119 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:48.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.119 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:48.119 16:36:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.119 [2024-11-05 16:36:54.422496] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:12:48.119 [2024-11-05 16:36:54.422566] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.119 [2024-11-05 16:36:54.526494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.119 [2024-11-05 16:36:54.580511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.119 [2024-11-05 16:36:54.580566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.119 [2024-11-05 16:36:54.580575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.119 [2024-11-05 16:36:54.580583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.119 [2024-11-05 16:36:54.580590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:48.119 [2024-11-05 16:36:54.582981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.119 [2024-11-05 16:36:54.583269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.119 [2024-11-05 16:36:54.583450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:48.119 [2024-11-05 16:36:54.583452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.380 [2024-11-05 16:36:55.243118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:48.380 16:36:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.380 Malloc0 00:12:48.380 [2024-11-05 16:36:55.317928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2979055 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2979055 /var/tmp/bdevperf.sock 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2979055 ']' 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:48.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:12:48.380 { 00:12:48.380 "params": { 00:12:48.380 "name": "Nvme$subsystem", 00:12:48.380 "trtype": "$TEST_TRANSPORT", 00:12:48.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:48.380 "adrfam": "ipv4", 00:12:48.380 "trsvcid": "$NVMF_PORT", 00:12:48.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:48.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:48.380 "hdgst": ${hdgst:-false}, 
00:12:48.380 "ddgst": ${ddgst:-false} 00:12:48.380 }, 00:12:48.380 "method": "bdev_nvme_attach_controller" 00:12:48.380 } 00:12:48.380 EOF 00:12:48.380 )") 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:12:48.380 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:12:48.380 "params": { 00:12:48.380 "name": "Nvme0", 00:12:48.380 "trtype": "tcp", 00:12:48.380 "traddr": "10.0.0.2", 00:12:48.380 "adrfam": "ipv4", 00:12:48.380 "trsvcid": "4420", 00:12:48.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:48.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:48.380 "hdgst": false, 00:12:48.380 "ddgst": false 00:12:48.380 }, 00:12:48.380 "method": "bdev_nvme_attach_controller" 00:12:48.380 }' 00:12:48.380 [2024-11-05 16:36:55.420004] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:12:48.380 [2024-11-05 16:36:55.420057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2979055 ] 00:12:48.641 [2024-11-05 16:36:55.491023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.641 [2024-11-05 16:36:55.527279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.907 Running I/O for 10 seconds... 
00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:49.170 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=898 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 898 -ge 100 ']' 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.432 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.432 [2024-11-05 16:36:56.280958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is 
same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.432 [2024-11-05 16:36:56.281083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be 
set 00:12:49.433 [2024-11-05 16:36:56.281102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 16:36:56.281176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf130 is same with the state(6) to be set 00:12:49.433 [2024-11-05 
16:36:56.284430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 
16:36:56.284873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.284985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.284992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.285001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.285009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.285018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.433 [2024-11-05 16:36:56.285026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.433 [2024-11-05 16:36:56.285036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 
[2024-11-05 16:36:56.285259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285352] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285444] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.434 [2024-11-05 16:36:56.285513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:49.434 [2024-11-05 16:36:56.285539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.434 [2024-11-05 16:36:56.285574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:49.434 [2024-11-05 16:36:56.285665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.434 [2024-11-05 16:36:56.285677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.434 [2024-11-05 16:36:56.285693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.434 [2024-11-05 16:36:56.285708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.434 [2024-11-05 16:36:56.285723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.434 [2024-11-05 16:36:56.285732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a62000 is same with the state(6) to be set 00:12:49.434 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.434 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.434 [2024-11-05 16:36:56.286914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:12:49.434 task offset: 124416 on job bdev=Nvme0n1 fails 00:12:49.434 00:12:49.434 Latency(us) 00:12:49.434 [2024-11-05T15:36:56.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.434 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:49.434 Job: Nvme0n1 ended in about 0.57 seconds with error 00:12:49.434 Verification LBA range: start 0x0 length 0x400 00:12:49.435 Nvme0n1 : 0.57 1681.19 105.07 112.08 0.00 34797.90 2061.65 32549.55 00:12:49.435 [2024-11-05T15:36:56.498Z] =================================================================================================================== 00:12:49.435 [2024-11-05T15:36:56.498Z] Total : 1681.19 105.07 112.08 0.00 34797.90 2061.65 32549.55 00:12:49.435 [2024-11-05 16:36:56.288908] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:49.435 [2024-11-05 16:36:56.288929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1a62000 (9): Bad file descriptor 00:12:49.435 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.435 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:49.435 [2024-11-05 16:36:56.421969] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2979055 00:12:50.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2979055) - No such process 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:12:50.376 { 00:12:50.376 "params": { 00:12:50.376 "name": "Nvme$subsystem", 00:12:50.376 "trtype": "$TEST_TRANSPORT", 00:12:50.376 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.376 "adrfam": "ipv4", 00:12:50.376 "trsvcid": "$NVMF_PORT", 00:12:50.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.376 "hdgst": ${hdgst:-false}, 00:12:50.376 "ddgst": ${ddgst:-false} 00:12:50.376 }, 00:12:50.376 "method": "bdev_nvme_attach_controller" 00:12:50.376 } 00:12:50.376 EOF 00:12:50.376 )") 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:12:50.376 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:12:50.376 "params": { 00:12:50.376 "name": "Nvme0", 00:12:50.376 "trtype": "tcp", 00:12:50.376 "traddr": "10.0.0.2", 00:12:50.376 "adrfam": "ipv4", 00:12:50.376 "trsvcid": "4420", 00:12:50.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:50.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:50.376 "hdgst": false, 00:12:50.376 "ddgst": false 00:12:50.376 }, 00:12:50.376 "method": "bdev_nvme_attach_controller" 00:12:50.376 }' 00:12:50.376 [2024-11-05 16:36:57.357981] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:12:50.376 [2024-11-05 16:36:57.358036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2979541 ] 00:12:50.376 [2024-11-05 16:36:57.427711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.636 [2024-11-05 16:36:57.463191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.636 Running I/O for 1 seconds... 
00:12:51.575 1600.00 IOPS, 100.00 MiB/s 00:12:51.575 Latency(us) 00:12:51.575 [2024-11-05T15:36:58.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.575 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:51.575 Verification LBA range: start 0x0 length 0x400 00:12:51.575 Nvme0n1 : 1.01 1646.10 102.88 0.00 0.00 38198.45 7045.12 32768.00 00:12:51.575 [2024-11-05T15:36:58.638Z] =================================================================================================================== 00:12:51.575 [2024-11-05T15:36:58.638Z] Total : 1646.10 102.88 0.00 0.00 38198.45 7045.12 32768.00 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:51.836 16:36:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:51.836 rmmod nvme_tcp 00:12:51.836 rmmod nvme_fabrics 00:12:51.836 rmmod nvme_keyring 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 2978876 ']' 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 2978876 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2978876 ']' 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2978876 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2978876 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2978876' 00:12:51.836 killing process with pid 2978876 00:12:51.836 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2978876 00:12:51.836 16:36:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2978876 00:12:52.096 [2024-11-05 16:36:58.967541] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:52.096 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:52.096 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:12:52.096 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:12:52.096 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:52.097 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:52.097 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:52.097 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@211 -- # local 
dev=cvl_0_0 in_ns= 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:54.036 16:37:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:54.036 00:12:54.036 real 0m14.453s 00:12:54.036 user 0m22.364s 00:12:54.036 sys 0m6.630s 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.036 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.036 ************************************ 00:12:54.036 END TEST nvmf_host_management 00:12:54.036 ************************************ 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.297 ************************************ 00:12:54.297 START TEST nvmf_lvol 00:12:54.297 ************************************ 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:54.297 * Looking for test storage... 
00:12:54.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.297 16:37:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:54.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.297 --rc genhtml_branch_coverage=1 00:12:54.297 --rc genhtml_function_coverage=1 00:12:54.297 --rc genhtml_legend=1 00:12:54.297 --rc geninfo_all_blocks=1 00:12:54.297 --rc geninfo_unexecuted_blocks=1 
00:12:54.297 00:12:54.297 ' 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:54.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.297 --rc genhtml_branch_coverage=1 00:12:54.297 --rc genhtml_function_coverage=1 00:12:54.297 --rc genhtml_legend=1 00:12:54.297 --rc geninfo_all_blocks=1 00:12:54.297 --rc geninfo_unexecuted_blocks=1 00:12:54.297 00:12:54.297 ' 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:54.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.297 --rc genhtml_branch_coverage=1 00:12:54.297 --rc genhtml_function_coverage=1 00:12:54.297 --rc genhtml_legend=1 00:12:54.297 --rc geninfo_all_blocks=1 00:12:54.297 --rc geninfo_unexecuted_blocks=1 00:12:54.297 00:12:54.297 ' 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:54.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.297 --rc genhtml_branch_coverage=1 00:12:54.297 --rc genhtml_function_coverage=1 00:12:54.297 --rc genhtml_legend=1 00:12:54.297 --rc geninfo_all_blocks=1 00:12:54.297 --rc geninfo_unexecuted_blocks=1 00:12:54.297 00:12:54.297 ' 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.297 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:54.558 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.559 16:37:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:54.559 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 
-- # _remove_target_ns 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:12:54.559 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:13:02.701 16:37:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.701 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:02.702 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:02.702 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- 
# [[ e810 == e810 ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:02.702 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:02.703 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:02.703 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@247 -- # create_target_ns 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:02.703 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:02.704 16:37:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:02.704 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # 
eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:02.706 10.0.0.1 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:02.706 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:02.707 10.0.0.2 00:13:02.707 16:37:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 
-- # dev_map["initiator$id"]=cvl_0_0 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:02.707 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:02.708 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:02.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.564 ms 00:13:02.711 00:13:02.711 --- 10.0.0.1 ping statistics --- 00:13:02.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.711 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:02.711 16:37:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:02.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:13:02.711 00:13:02.711 --- 10.0.0.2 ping statistics --- 00:13:02.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.711 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:02.711 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:13:02.712 16:37:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:13:02.712 ' 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=2984083 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 2984083 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:02.712 
16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2984083 ']' 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:02.712 16:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.712 [2024-11-05 16:37:08.909685] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:13:02.712 [2024-11-05 16:37:08.909770] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.712 [2024-11-05 16:37:08.995471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.712 [2024-11-05 16:37:09.036856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.712 [2024-11-05 16:37:09.036893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.712 [2024-11-05 16:37:09.036901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.712 [2024-11-05 16:37:09.036908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:02.712 [2024-11-05 16:37:09.036914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.712 [2024-11-05 16:37:09.038331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.712 [2024-11-05 16:37:09.038448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.712 [2024-11-05 16:37:09.038451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.712 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.712 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:13:02.712 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:02.712 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:02.712 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.712 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.712 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:02.974 [2024-11-05 16:37:09.909556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.974 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:03.235 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:03.235 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:03.495 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:03.495 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:03.495 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:03.756 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c1ab5c8c-0704-448f-aed9-4118dbb1ecd1 00:13:03.756 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c1ab5c8c-0704-448f-aed9-4118dbb1ecd1 lvol 20 00:13:04.017 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=70300f9a-3614-4710-a6aa-9603625c5b8a 00:13:04.017 16:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:04.017 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 70300f9a-3614-4710-a6aa-9603625c5b8a 00:13:04.278 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:04.538 [2024-11-05 16:37:11.409821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.538 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.799 16:37:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2984683 00:13:04.799 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:04.799 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:05.741 16:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 70300f9a-3614-4710-a6aa-9603625c5b8a MY_SNAPSHOT 00:13:06.001 16:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d7c1d4a5-d2c5-41ab-8813-09a6323622ab 00:13:06.001 16:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 70300f9a-3614-4710-a6aa-9603625c5b8a 30 00:13:06.001 16:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d7c1d4a5-d2c5-41ab-8813-09a6323622ab MY_CLONE 00:13:06.261 16:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6e69f43a-ae04-4733-a51b-31f9b9ff0fcc 00:13:06.261 16:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6e69f43a-ae04-4733-a51b-31f9b9ff0fcc 00:13:06.831 16:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2984683 00:13:14.966 Initializing NVMe Controllers 00:13:14.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:14.966 Controller IO queue size 128, less than required. 
00:13:14.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:14.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:14.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:14.966 Initialization complete. Launching workers. 00:13:14.966 ======================================================== 00:13:14.967 Latency(us) 00:13:14.967 Device Information : IOPS MiB/s Average min max 00:13:14.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12104.20 47.28 10577.52 1213.35 48685.72 00:13:14.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17718.30 69.21 7223.19 685.35 37956.07 00:13:14.967 ======================================================== 00:13:14.967 Total : 29822.50 116.49 8584.63 685.35 48685.72 00:13:14.967 00:13:14.967 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:15.227 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 70300f9a-3614-4710-a6aa-9603625c5b8a 00:13:15.488 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1ab5c8c-0704-448f-aed9-4118dbb1ecd1 00:13:15.488 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:15.748 16:37:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:15.748 rmmod nvme_tcp 00:13:15.748 rmmod nvme_fabrics 00:13:15.748 rmmod nvme_keyring 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 2984083 ']' 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 2984083 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2984083 ']' 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2984083 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2984083 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 2984083' 00:13:15.748 killing process with pid 2984083 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2984083 00:13:15.748 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2984083 00:13:16.009 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:16.009 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:13:16.009 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:13:16.009 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:16.009 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:16.009 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:16.009 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@212 -- 
# [[ -n '' ]] 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:13:17.923 00:13:17.923 real 0m23.764s 00:13:17.923 user 1m4.300s 00:13:17.923 sys 0m8.571s 00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:13:17.923 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:17.923 ************************************ 00:13:17.924 END TEST nvmf_lvol 00:13:17.924 ************************************ 00:13:17.924 16:37:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:17.924 16:37:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:17.924 16:37:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:17.924 16:37:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:18.185 ************************************ 00:13:18.185 START TEST nvmf_lvs_grow 00:13:18.185 ************************************ 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:18.185 * Looking for test storage... 
00:13:18.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:18.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.185 --rc genhtml_branch_coverage=1 00:13:18.185 --rc 
genhtml_function_coverage=1 00:13:18.185 --rc genhtml_legend=1 00:13:18.185 --rc geninfo_all_blocks=1 00:13:18.185 --rc geninfo_unexecuted_blocks=1 00:13:18.185 00:13:18.185 ' 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:18.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.185 --rc genhtml_branch_coverage=1 00:13:18.185 --rc genhtml_function_coverage=1 00:13:18.185 --rc genhtml_legend=1 00:13:18.185 --rc geninfo_all_blocks=1 00:13:18.185 --rc geninfo_unexecuted_blocks=1 00:13:18.185 00:13:18.185 ' 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:18.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.185 --rc genhtml_branch_coverage=1 00:13:18.185 --rc genhtml_function_coverage=1 00:13:18.185 --rc genhtml_legend=1 00:13:18.185 --rc geninfo_all_blocks=1 00:13:18.185 --rc geninfo_unexecuted_blocks=1 00:13:18.185 00:13:18.185 ' 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:18.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.185 --rc genhtml_branch_coverage=1 00:13:18.185 --rc genhtml_function_coverage=1 00:13:18.185 --rc genhtml_legend=1 00:13:18.185 --rc geninfo_all_blocks=1 00:13:18.185 --rc geninfo_unexecuted_blocks=1 00:13:18.185 00:13:18.185 ' 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.185 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.186 16:37:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 
00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:18.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:18.186 16:37:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:13:18.186 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:13:24.894 
16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.894 16:37:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:24.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:24.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:24.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.894 16:37:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:24.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # create_target_ns 00:13:24.894 16:37:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:24.894 16:37:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:24.894 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:24.895 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 
00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:25.157 10.0.0.1 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@11 -- # local val=167772162 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:25.157 10.0.0.2 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # 
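The trace above shows `val_to_ip` turning the integer pool values 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 (`printf '%u.%u.%u.%u\n' 10 0 0 1`). A minimal sketch of that conversion, re-implemented from the log output (not the actual setup.sh source, which is only partially visible here):

```shell
# Sketch of the val_to_ip helper seen in nvmf/setup.sh: unpack a 32-bit
# integer into dotted-quad IPv4 notation by shifting out one octet at a time.
# 167772161 == 0x0A000001 == 10.0.0.1
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```

This also explains the `ip_pool += 2` step in the trace: each interface pair consumes two consecutive integers, one for the initiator address and one for the target address.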
get_initiator_ip_address initiator0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:25.157 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:25.419 16:37:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:25.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.694 ms 00:13:25.419 00:13:25.419 --- 10.0.0.1 ping statistics --- 00:13:25.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.419 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:25.419 16:37:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:25.419 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:25.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:25.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:13:25.419 00:13:25.420 --- 10.0.0.2 ping statistics --- 00:13:25.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.420 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
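The setup sequence traced above (`add_to_ns`, `set_ip`, `set_up`, `ipts`, then `ping_ips`) condenses to a handful of iproute2/iptables commands. A sketch using the device and namespace names from the log (`cvl_0_0`, `cvl_0_1`, `nvmf_ns_spdk`); this is a config fragment that needs root and the actual Intel E810 (`cvl`) interfaces present on the CI node, not a portable script:

```shell
# Move the target-side interface into its own network namespace,
# assign the 10.0.0.0/24 pair, record each IP in ifalias (the tests
# later read it back via get_ip_address), and open TCP 4420.
ip link set cvl_0_1 netns nvmf_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_0
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias

ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up

iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT

# Sanity check both directions, as ping_ips does:
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
ping -c 1 10.0.0.2
```

Keeping the target interface in a separate namespace is what lets a single host act as both NVMe/TCP initiator and target over real hardware: traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link rather than loopback.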
nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 
00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:25.420 
16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 
00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:13:25.420 ' 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=2991095 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 2991095 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2991095 ']' 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:25.420 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:25.420 [2024-11-05 16:37:32.449850] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:13:25.420 [2024-11-05 16:37:32.449918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.681 [2024-11-05 16:37:32.532583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.681 [2024-11-05 16:37:32.573412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.681 [2024-11-05 16:37:32.573448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.681 [2024-11-05 16:37:32.573456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.681 [2024-11-05 16:37:32.573464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.681 [2024-11-05 16:37:32.573470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:25.681 [2024-11-05 16:37:32.574070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.252 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:26.252 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:13:26.252 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:26.252 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:26.252 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:26.252 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.252 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:26.513 [2024-11-05 16:37:33.435680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:26.513 ************************************ 00:13:26.513 START TEST lvs_grow_clean 00:13:26.513 ************************************ 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:26.513 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:26.773 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:26.773 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:27.034 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:27.034 16:37:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:27.034 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:27.034 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:27.034 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:27.034 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b44a2c4a-be01-45f7-ba20-e78002de9902 lvol 150 00:13:27.294 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=26f6b7b5-70a1-461b-a2d2-81258ad083df 00:13:27.294 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:27.294 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:27.555 [2024-11-05 16:37:34.381909] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:27.555 [2024-11-05 16:37:34.381958] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:27.555 true 00:13:27.555 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
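The `(( data_clusters == 49 ))` assertion above follows from the sizes the test chose: a 200 MiB AIO file carved into 4 MiB clusters (`--cluster-sz 4194304`) yields 50 clusters, of which one is presumably consumed by lvstore metadata, leaving 49 for data. A back-of-envelope check of that arithmetic (the one-cluster-metadata overhead is an assumption inferred from the log, not taken from SPDK documentation):

```shell
# Expected data clusters for the lvs_grow test's initial lvstore:
# 200 MiB file / 4 MiB clusters = 50 total, minus ~1 metadata cluster.
aio_size_mb=200
cluster_mb=4
echo $(( aio_size_mb / cluster_mb - 1 ))   # -> 49
```

The same arithmetic predicts the post-grow value the test checks later: after `truncate -s 400M` and `bdev_aio_rescan`, 400/4 - 1 = 99 data clusters.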
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:27.555 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:27.555 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:27.555 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:27.815 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 26f6b7b5-70a1-461b-a2d2-81258ad083df 00:13:27.815 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:28.075 [2024-11-05 16:37:35.027930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.075 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2991792 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2991792 /var/tmp/bdevperf.sock 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2991792 ']' 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:28.336 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:28.336 [2024-11-05 16:37:35.278592] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:13:28.336 [2024-11-05 16:37:35.278646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991792 ] 00:13:28.336 [2024-11-05 16:37:35.364917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.596 [2024-11-05 16:37:35.401925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.168 16:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:29.168 16:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:13:29.168 16:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:29.429 Nvme0n1 00:13:29.429 16:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:29.689 [ 00:13:29.689 { 00:13:29.689 "name": "Nvme0n1", 00:13:29.689 "aliases": [ 00:13:29.689 "26f6b7b5-70a1-461b-a2d2-81258ad083df" 00:13:29.689 ], 00:13:29.689 "product_name": "NVMe disk", 00:13:29.689 "block_size": 4096, 00:13:29.689 "num_blocks": 38912, 00:13:29.689 "uuid": "26f6b7b5-70a1-461b-a2d2-81258ad083df", 00:13:29.689 "numa_id": 0, 00:13:29.689 "assigned_rate_limits": { 00:13:29.689 "rw_ios_per_sec": 0, 00:13:29.689 "rw_mbytes_per_sec": 0, 00:13:29.689 "r_mbytes_per_sec": 0, 00:13:29.689 "w_mbytes_per_sec": 0 00:13:29.690 }, 00:13:29.690 "claimed": false, 00:13:29.690 "zoned": false, 00:13:29.690 "supported_io_types": { 00:13:29.690 "read": true, 
00:13:29.690 "write": true, 00:13:29.690 "unmap": true, 00:13:29.690 "flush": true, 00:13:29.690 "reset": true, 00:13:29.690 "nvme_admin": true, 00:13:29.690 "nvme_io": true, 00:13:29.690 "nvme_io_md": false, 00:13:29.690 "write_zeroes": true, 00:13:29.690 "zcopy": false, 00:13:29.690 "get_zone_info": false, 00:13:29.690 "zone_management": false, 00:13:29.690 "zone_append": false, 00:13:29.690 "compare": true, 00:13:29.690 "compare_and_write": true, 00:13:29.690 "abort": true, 00:13:29.690 "seek_hole": false, 00:13:29.690 "seek_data": false, 00:13:29.690 "copy": true, 00:13:29.690 "nvme_iov_md": false 00:13:29.690 }, 00:13:29.690 "memory_domains": [ 00:13:29.690 { 00:13:29.690 "dma_device_id": "system", 00:13:29.690 "dma_device_type": 1 00:13:29.690 } 00:13:29.690 ], 00:13:29.690 "driver_specific": { 00:13:29.690 "nvme": [ 00:13:29.690 { 00:13:29.690 "trid": { 00:13:29.690 "trtype": "TCP", 00:13:29.690 "adrfam": "IPv4", 00:13:29.690 "traddr": "10.0.0.2", 00:13:29.690 "trsvcid": "4420", 00:13:29.690 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:29.690 }, 00:13:29.690 "ctrlr_data": { 00:13:29.690 "cntlid": 1, 00:13:29.690 "vendor_id": "0x8086", 00:13:29.690 "model_number": "SPDK bdev Controller", 00:13:29.690 "serial_number": "SPDK0", 00:13:29.690 "firmware_revision": "25.01", 00:13:29.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:29.690 "oacs": { 00:13:29.690 "security": 0, 00:13:29.690 "format": 0, 00:13:29.690 "firmware": 0, 00:13:29.690 "ns_manage": 0 00:13:29.690 }, 00:13:29.690 "multi_ctrlr": true, 00:13:29.690 "ana_reporting": false 00:13:29.690 }, 00:13:29.690 "vs": { 00:13:29.690 "nvme_version": "1.3" 00:13:29.690 }, 00:13:29.690 "ns_data": { 00:13:29.690 "id": 1, 00:13:29.690 "can_share": true 00:13:29.690 } 00:13:29.690 } 00:13:29.690 ], 00:13:29.690 "mp_policy": "active_passive" 00:13:29.690 } 00:13:29.690 } 00:13:29.690 ] 00:13:29.690 16:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2992127 00:13:29.690 16:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:29.690 16:37:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:29.690 Running I/O for 10 seconds... 00:13:30.634 Latency(us) 00:13:30.634 [2024-11-05T15:37:37.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.634 Nvme0n1 : 1.00 17908.00 69.95 0.00 0.00 0.00 0.00 0.00 00:13:30.634 [2024-11-05T15:37:37.697Z] =================================================================================================================== 00:13:30.634 [2024-11-05T15:37:37.697Z] Total : 17908.00 69.95 0.00 0.00 0.00 0.00 0.00 00:13:30.634 00:13:31.577 16:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:31.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:31.577 Nvme0n1 : 2.00 17969.50 70.19 0.00 0.00 0.00 0.00 0.00 00:13:31.577 [2024-11-05T15:37:38.640Z] =================================================================================================================== 00:13:31.577 [2024-11-05T15:37:38.640Z] Total : 17969.50 70.19 0.00 0.00 0.00 0.00 0.00 00:13:31.577 00:13:31.838 true 00:13:31.838 16:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:31.838 16:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:13:31.838 16:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:31.838 16:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:31.838 16:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2992127 00:13:32.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.780 Nvme0n1 : 3.00 18006.33 70.34 0.00 0.00 0.00 0.00 0.00 00:13:32.780 [2024-11-05T15:37:39.843Z] =================================================================================================================== 00:13:32.780 [2024-11-05T15:37:39.843Z] Total : 18006.33 70.34 0.00 0.00 0.00 0.00 0.00 00:13:32.780 00:13:33.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.722 Nvme0n1 : 4.00 18039.00 70.46 0.00 0.00 0.00 0.00 0.00 00:13:33.722 [2024-11-05T15:37:40.785Z] =================================================================================================================== 00:13:33.722 [2024-11-05T15:37:40.785Z] Total : 18039.00 70.46 0.00 0.00 0.00 0.00 0.00 00:13:33.722 00:13:34.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.665 Nvme0n1 : 5.00 18055.40 70.53 0.00 0.00 0.00 0.00 0.00 00:13:34.665 [2024-11-05T15:37:41.728Z] =================================================================================================================== 00:13:34.665 [2024-11-05T15:37:41.728Z] Total : 18055.40 70.53 0.00 0.00 0.00 0.00 0.00 00:13:34.665 00:13:35.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.608 Nvme0n1 : 6.00 18087.00 70.65 0.00 0.00 0.00 0.00 0.00 00:13:35.608 [2024-11-05T15:37:42.671Z] =================================================================================================================== 00:13:35.608 
[2024-11-05T15:37:42.671Z] Total : 18087.00 70.65 0.00 0.00 0.00 0.00 0.00 00:13:35.608 00:13:36.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.552 Nvme0n1 : 7.00 18100.14 70.70 0.00 0.00 0.00 0.00 0.00 00:13:36.552 [2024-11-05T15:37:43.615Z] =================================================================================================================== 00:13:36.552 [2024-11-05T15:37:43.615Z] Total : 18100.14 70.70 0.00 0.00 0.00 0.00 0.00 00:13:36.552 00:13:37.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.939 Nvme0n1 : 8.00 18122.62 70.79 0.00 0.00 0.00 0.00 0.00 00:13:37.939 [2024-11-05T15:37:45.002Z] =================================================================================================================== 00:13:37.939 [2024-11-05T15:37:45.002Z] Total : 18122.62 70.79 0.00 0.00 0.00 0.00 0.00 00:13:37.939 00:13:38.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.882 Nvme0n1 : 9.00 18137.89 70.85 0.00 0.00 0.00 0.00 0.00 00:13:38.882 [2024-11-05T15:37:45.945Z] =================================================================================================================== 00:13:38.883 [2024-11-05T15:37:45.946Z] Total : 18137.89 70.85 0.00 0.00 0.00 0.00 0.00 00:13:38.883 00:13:39.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.827 Nvme0n1 : 10.00 18143.50 70.87 0.00 0.00 0.00 0.00 0.00 00:13:39.827 [2024-11-05T15:37:46.890Z] =================================================================================================================== 00:13:39.827 [2024-11-05T15:37:46.890Z] Total : 18143.50 70.87 0.00 0.00 0.00 0.00 0.00 00:13:39.827 00:13:39.827 00:13:39.827 Latency(us) 00:13:39.827 [2024-11-05T15:37:46.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:13:39.827 Nvme0n1 : 10.00 18147.89 70.89 0.00 0.00 7050.52 2102.61 13161.81 00:13:39.827 [2024-11-05T15:37:46.890Z] =================================================================================================================== 00:13:39.827 [2024-11-05T15:37:46.890Z] Total : 18147.89 70.89 0.00 0.00 7050.52 2102.61 13161.81 00:13:39.827 { 00:13:39.827 "results": [ 00:13:39.827 { 00:13:39.827 "job": "Nvme0n1", 00:13:39.827 "core_mask": "0x2", 00:13:39.827 "workload": "randwrite", 00:13:39.827 "status": "finished", 00:13:39.827 "queue_depth": 128, 00:13:39.827 "io_size": 4096, 00:13:39.827 "runtime": 10.004632, 00:13:39.827 "iops": 18147.893895547582, 00:13:39.827 "mibps": 70.89021052948274, 00:13:39.827 "io_failed": 0, 00:13:39.827 "io_timeout": 0, 00:13:39.827 "avg_latency_us": 7050.52258797222, 00:13:39.827 "min_latency_us": 2102.6133333333332, 00:13:39.827 "max_latency_us": 13161.813333333334 00:13:39.827 } 00:13:39.827 ], 00:13:39.827 "core_count": 1 00:13:39.827 } 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2991792 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2991792 ']' 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2991792 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2991792 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:39.827 16:37:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2991792' 00:13:39.827 killing process with pid 2991792 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2991792 00:13:39.827 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.827 00:13:39.827 Latency(us) 00:13:39.827 [2024-11-05T15:37:46.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.827 [2024-11-05T15:37:46.890Z] =================================================================================================================== 00:13:39.827 [2024-11-05T15:37:46.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2991792 00:13:39.827 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:40.089 16:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:40.350 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:40.350 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:40.350 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:13:40.350 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:40.350 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:40.611 [2024-11-05 16:37:47.533371] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.611 
16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:40.611 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:40.872 request: 00:13:40.872 { 00:13:40.872 "uuid": "b44a2c4a-be01-45f7-ba20-e78002de9902", 00:13:40.872 "method": "bdev_lvol_get_lvstores", 00:13:40.872 "req_id": 1 00:13:40.872 } 00:13:40.872 Got JSON-RPC error response 00:13:40.872 response: 00:13:40.872 { 00:13:40.872 "code": -19, 00:13:40.872 "message": "No such device" 00:13:40.872 } 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:40.872 aio_bdev 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 26f6b7b5-70a1-461b-a2d2-81258ad083df 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=26f6b7b5-70a1-461b-a2d2-81258ad083df 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:40.872 16:37:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:41.133 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 26f6b7b5-70a1-461b-a2d2-81258ad083df -t 2000 00:13:41.394 [ 00:13:41.394 { 00:13:41.394 "name": "26f6b7b5-70a1-461b-a2d2-81258ad083df", 00:13:41.394 "aliases": [ 00:13:41.394 "lvs/lvol" 00:13:41.394 ], 00:13:41.394 "product_name": "Logical Volume", 00:13:41.394 "block_size": 4096, 00:13:41.394 "num_blocks": 38912, 00:13:41.394 "uuid": "26f6b7b5-70a1-461b-a2d2-81258ad083df", 00:13:41.394 "assigned_rate_limits": { 00:13:41.394 "rw_ios_per_sec": 0, 00:13:41.394 "rw_mbytes_per_sec": 0, 00:13:41.394 "r_mbytes_per_sec": 0, 00:13:41.394 "w_mbytes_per_sec": 0 00:13:41.394 }, 00:13:41.394 "claimed": false, 00:13:41.394 "zoned": false, 00:13:41.394 "supported_io_types": { 00:13:41.394 "read": true, 00:13:41.394 "write": true, 00:13:41.394 "unmap": true, 00:13:41.394 "flush": false, 00:13:41.394 "reset": true, 00:13:41.394 
"nvme_admin": false, 00:13:41.394 "nvme_io": false, 00:13:41.394 "nvme_io_md": false, 00:13:41.394 "write_zeroes": true, 00:13:41.394 "zcopy": false, 00:13:41.394 "get_zone_info": false, 00:13:41.394 "zone_management": false, 00:13:41.394 "zone_append": false, 00:13:41.394 "compare": false, 00:13:41.394 "compare_and_write": false, 00:13:41.394 "abort": false, 00:13:41.394 "seek_hole": true, 00:13:41.394 "seek_data": true, 00:13:41.394 "copy": false, 00:13:41.394 "nvme_iov_md": false 00:13:41.394 }, 00:13:41.394 "driver_specific": { 00:13:41.394 "lvol": { 00:13:41.394 "lvol_store_uuid": "b44a2c4a-be01-45f7-ba20-e78002de9902", 00:13:41.394 "base_bdev": "aio_bdev", 00:13:41.394 "thin_provision": false, 00:13:41.394 "num_allocated_clusters": 38, 00:13:41.394 "snapshot": false, 00:13:41.394 "clone": false, 00:13:41.394 "esnap_clone": false 00:13:41.394 } 00:13:41.394 } 00:13:41.394 } 00:13:41.394 ] 00:13:41.394 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:13:41.394 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:41.394 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:41.394 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:41.394 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:41.394 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:41.655 16:37:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:41.655 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 26f6b7b5-70a1-461b-a2d2-81258ad083df 00:13:41.924 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b44a2c4a-be01-45f7-ba20-e78002de9902 00:13:41.924 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:42.185 00:13:42.185 real 0m15.599s 00:13:42.185 user 0m15.355s 00:13:42.185 sys 0m1.320s 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:42.185 ************************************ 00:13:42.185 END TEST lvs_grow_clean 00:13:42.185 ************************************ 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:42.185 ************************************ 
00:13:42.185 START TEST lvs_grow_dirty 00:13:42.185 ************************************ 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:42.185 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:42.186 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:42.186 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:42.186 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:42.186 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:42.186 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:42.186 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:42.446 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:42.446 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:42.708 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:42.708 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:42.708 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:42.708 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:42.708 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:42.708 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d lvol 150 00:13:42.968 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=07a45610-7bbd-4fc2-b006-68e31957fb66 00:13:42.968 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:42.968 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:43.229 [2024-11-05 16:37:50.054909] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:13:43.229 [2024-11-05 16:37:50.054962] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:43.229 true 00:13:43.229 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:43.229 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:43.229 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:43.229 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:43.490 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 07a45610-7bbd-4fc2-b006-68e31957fb66 00:13:43.751 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:43.751 [2024-11-05 16:37:50.708920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.751 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2994887 00:13:44.012 16:37:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2994887 /var/tmp/bdevperf.sock 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2994887 ']' 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:44.012 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 [2024-11-05 16:37:50.952081] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:13:44.012 [2024-11-05 16:37:50.952142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994887 ] 00:13:44.012 [2024-11-05 16:37:51.037248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.012 [2024-11-05 16:37:51.067479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.955 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:44.955 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:13:44.955 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:44.955 Nvme0n1 00:13:45.216 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:45.216 [ 00:13:45.216 { 00:13:45.216 "name": "Nvme0n1", 00:13:45.216 "aliases": [ 00:13:45.216 "07a45610-7bbd-4fc2-b006-68e31957fb66" 00:13:45.216 ], 00:13:45.216 "product_name": "NVMe disk", 00:13:45.216 "block_size": 4096, 00:13:45.216 "num_blocks": 38912, 00:13:45.216 "uuid": "07a45610-7bbd-4fc2-b006-68e31957fb66", 00:13:45.216 "numa_id": 0, 00:13:45.216 "assigned_rate_limits": { 00:13:45.216 "rw_ios_per_sec": 0, 00:13:45.216 "rw_mbytes_per_sec": 0, 00:13:45.216 "r_mbytes_per_sec": 0, 00:13:45.216 "w_mbytes_per_sec": 0 00:13:45.216 }, 00:13:45.216 "claimed": false, 00:13:45.216 "zoned": false, 00:13:45.216 "supported_io_types": { 00:13:45.216 "read": true, 
00:13:45.216 "write": true, 00:13:45.216 "unmap": true, 00:13:45.216 "flush": true, 00:13:45.216 "reset": true, 00:13:45.216 "nvme_admin": true, 00:13:45.216 "nvme_io": true, 00:13:45.216 "nvme_io_md": false, 00:13:45.216 "write_zeroes": true, 00:13:45.216 "zcopy": false, 00:13:45.216 "get_zone_info": false, 00:13:45.216 "zone_management": false, 00:13:45.216 "zone_append": false, 00:13:45.216 "compare": true, 00:13:45.216 "compare_and_write": true, 00:13:45.216 "abort": true, 00:13:45.216 "seek_hole": false, 00:13:45.216 "seek_data": false, 00:13:45.216 "copy": true, 00:13:45.216 "nvme_iov_md": false 00:13:45.216 }, 00:13:45.216 "memory_domains": [ 00:13:45.216 { 00:13:45.216 "dma_device_id": "system", 00:13:45.216 "dma_device_type": 1 00:13:45.216 } 00:13:45.216 ], 00:13:45.216 "driver_specific": { 00:13:45.216 "nvme": [ 00:13:45.216 { 00:13:45.216 "trid": { 00:13:45.216 "trtype": "TCP", 00:13:45.216 "adrfam": "IPv4", 00:13:45.216 "traddr": "10.0.0.2", 00:13:45.216 "trsvcid": "4420", 00:13:45.216 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:45.216 }, 00:13:45.216 "ctrlr_data": { 00:13:45.216 "cntlid": 1, 00:13:45.216 "vendor_id": "0x8086", 00:13:45.216 "model_number": "SPDK bdev Controller", 00:13:45.216 "serial_number": "SPDK0", 00:13:45.216 "firmware_revision": "25.01", 00:13:45.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:45.216 "oacs": { 00:13:45.216 "security": 0, 00:13:45.216 "format": 0, 00:13:45.216 "firmware": 0, 00:13:45.216 "ns_manage": 0 00:13:45.216 }, 00:13:45.216 "multi_ctrlr": true, 00:13:45.216 "ana_reporting": false 00:13:45.216 }, 00:13:45.216 "vs": { 00:13:45.216 "nvme_version": "1.3" 00:13:45.216 }, 00:13:45.216 "ns_data": { 00:13:45.216 "id": 1, 00:13:45.216 "can_share": true 00:13:45.216 } 00:13:45.216 } 00:13:45.216 ], 00:13:45.216 "mp_policy": "active_passive" 00:13:45.216 } 00:13:45.216 } 00:13:45.216 ] 00:13:45.216 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2995224 00:13:45.216 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:45.216 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:45.477 Running I/O for 10 seconds... 00:13:46.421 Latency(us) 00:13:46.421 [2024-11-05T15:37:53.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:46.421 Nvme0n1 : 1.00 17813.00 69.58 0.00 0.00 0.00 0.00 0.00 00:13:46.421 [2024-11-05T15:37:53.484Z] =================================================================================================================== 00:13:46.421 [2024-11-05T15:37:53.484Z] Total : 17813.00 69.58 0.00 0.00 0.00 0.00 0.00 00:13:46.421 00:13:47.364 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:47.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.364 Nvme0n1 : 2.00 17943.00 70.09 0.00 0.00 0.00 0.00 0.00 00:13:47.364 [2024-11-05T15:37:54.427Z] =================================================================================================================== 00:13:47.364 [2024-11-05T15:37:54.427Z] Total : 17943.00 70.09 0.00 0.00 0.00 0.00 0.00 00:13:47.364 00:13:47.364 true 00:13:47.364 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:47.364 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:13:47.625 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:47.625 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:47.625 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2995224 00:13:48.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.566 Nvme0n1 : 3.00 17983.00 70.25 0.00 0.00 0.00 0.00 0.00 00:13:48.566 [2024-11-05T15:37:55.629Z] =================================================================================================================== 00:13:48.566 [2024-11-05T15:37:55.629Z] Total : 17983.00 70.25 0.00 0.00 0.00 0.00 0.00 00:13:48.566 00:13:49.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.508 Nvme0n1 : 4.00 18017.25 70.38 0.00 0.00 0.00 0.00 0.00 00:13:49.508 [2024-11-05T15:37:56.571Z] =================================================================================================================== 00:13:49.508 [2024-11-05T15:37:56.571Z] Total : 18017.25 70.38 0.00 0.00 0.00 0.00 0.00 00:13:49.508 00:13:50.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.451 Nvme0n1 : 5.00 18045.80 70.49 0.00 0.00 0.00 0.00 0.00 00:13:50.451 [2024-11-05T15:37:57.514Z] =================================================================================================================== 00:13:50.451 [2024-11-05T15:37:57.514Z] Total : 18045.80 70.49 0.00 0.00 0.00 0.00 0.00 00:13:50.451 00:13:51.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.393 Nvme0n1 : 6.00 18061.00 70.55 0.00 0.00 0.00 0.00 0.00 00:13:51.393 [2024-11-05T15:37:58.456Z] =================================================================================================================== 00:13:51.393 
[2024-11-05T15:37:58.456Z] Total : 18061.00 70.55 0.00 0.00 0.00 0.00 0.00 00:13:51.393 00:13:52.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.334 Nvme0n1 : 7.00 18082.43 70.63 0.00 0.00 0.00 0.00 0.00 00:13:52.334 [2024-11-05T15:37:59.397Z] =================================================================================================================== 00:13:52.334 [2024-11-05T15:37:59.397Z] Total : 18082.43 70.63 0.00 0.00 0.00 0.00 0.00 00:13:52.334 00:13:53.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.275 Nvme0n1 : 8.00 18085.88 70.65 0.00 0.00 0.00 0.00 0.00 00:13:53.275 [2024-11-05T15:38:00.338Z] =================================================================================================================== 00:13:53.275 [2024-11-05T15:38:00.338Z] Total : 18085.88 70.65 0.00 0.00 0.00 0.00 0.00 00:13:53.275 00:13:54.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.658 Nvme0n1 : 9.00 18101.67 70.71 0.00 0.00 0.00 0.00 0.00 00:13:54.658 [2024-11-05T15:38:01.721Z] =================================================================================================================== 00:13:54.658 [2024-11-05T15:38:01.721Z] Total : 18101.67 70.71 0.00 0.00 0.00 0.00 0.00 00:13:54.658 00:13:55.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.600 Nvme0n1 : 10.00 18113.70 70.76 0.00 0.00 0.00 0.00 0.00 00:13:55.600 [2024-11-05T15:38:02.663Z] =================================================================================================================== 00:13:55.600 [2024-11-05T15:38:02.663Z] Total : 18113.70 70.76 0.00 0.00 0.00 0.00 0.00 00:13:55.600 00:13:55.600 00:13:55.600 Latency(us) 00:13:55.600 [2024-11-05T15:38:02.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:13:55.600 Nvme0n1 : 10.01 18114.78 70.76 0.00 0.00 7064.44 4232.53 13052.59 00:13:55.600 [2024-11-05T15:38:02.663Z] =================================================================================================================== 00:13:55.600 [2024-11-05T15:38:02.663Z] Total : 18114.78 70.76 0.00 0.00 7064.44 4232.53 13052.59 00:13:55.600 { 00:13:55.600 "results": [ 00:13:55.600 { 00:13:55.600 "job": "Nvme0n1", 00:13:55.600 "core_mask": "0x2", 00:13:55.600 "workload": "randwrite", 00:13:55.600 "status": "finished", 00:13:55.600 "queue_depth": 128, 00:13:55.600 "io_size": 4096, 00:13:55.600 "runtime": 10.00647, 00:13:55.600 "iops": 18114.77973750983, 00:13:55.600 "mibps": 70.76085834964778, 00:13:55.600 "io_failed": 0, 00:13:55.600 "io_timeout": 0, 00:13:55.600 "avg_latency_us": 7064.4357024246265, 00:13:55.600 "min_latency_us": 4232.533333333334, 00:13:55.600 "max_latency_us": 13052.586666666666 00:13:55.600 } 00:13:55.600 ], 00:13:55.600 "core_count": 1 00:13:55.600 } 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2994887 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2994887 ']' 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2994887 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2994887 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:55.600 16:38:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2994887' 00:13:55.600 killing process with pid 2994887 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2994887 00:13:55.600 Received shutdown signal, test time was about 10.000000 seconds 00:13:55.600 00:13:55.600 Latency(us) 00:13:55.600 [2024-11-05T15:38:02.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.600 [2024-11-05T15:38:02.663Z] =================================================================================================================== 00:13:55.600 [2024-11-05T15:38:02.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2994887 00:13:55.600 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:55.861 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:55.861 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:55.861 16:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2991095 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2991095 00:13:56.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2991095 Killed "${NVMF_APP[@]}" "$@" 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=2997400 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 2997400 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2997400 ']' 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.123 16:38:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:56.123 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:56.384 [2024-11-05 16:38:03.203133] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:13:56.384 [2024-11-05 16:38:03.203219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.384 [2024-11-05 16:38:03.283249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.384 [2024-11-05 16:38:03.319452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.384 [2024-11-05 16:38:03.319487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.384 [2024-11-05 16:38:03.319499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.384 [2024-11-05 16:38:03.319506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.384 [2024-11-05 16:38:03.319512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:56.384 [2024-11-05 16:38:03.320086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.955 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:56.955 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:13:56.955 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:56.955 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.955 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:56.955 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.955 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:57.216 [2024-11-05 16:38:04.174481] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:57.216 [2024-11-05 16:38:04.174569] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:57.216 [2024-11-05 16:38:04.174599] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:57.216 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:57.216 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 07a45610-7bbd-4fc2-b006-68e31957fb66 00:13:57.216 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=07a45610-7bbd-4fc2-b006-68e31957fb66 
00:13:57.216 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:57.216 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:13:57.216 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:57.216 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:57.216 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:57.476 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 07a45610-7bbd-4fc2-b006-68e31957fb66 -t 2000 00:13:57.476 [ 00:13:57.477 { 00:13:57.477 "name": "07a45610-7bbd-4fc2-b006-68e31957fb66", 00:13:57.477 "aliases": [ 00:13:57.477 "lvs/lvol" 00:13:57.477 ], 00:13:57.477 "product_name": "Logical Volume", 00:13:57.477 "block_size": 4096, 00:13:57.477 "num_blocks": 38912, 00:13:57.477 "uuid": "07a45610-7bbd-4fc2-b006-68e31957fb66", 00:13:57.477 "assigned_rate_limits": { 00:13:57.477 "rw_ios_per_sec": 0, 00:13:57.477 "rw_mbytes_per_sec": 0, 00:13:57.477 "r_mbytes_per_sec": 0, 00:13:57.477 "w_mbytes_per_sec": 0 00:13:57.477 }, 00:13:57.477 "claimed": false, 00:13:57.477 "zoned": false, 00:13:57.477 "supported_io_types": { 00:13:57.477 "read": true, 00:13:57.477 "write": true, 00:13:57.477 "unmap": true, 00:13:57.477 "flush": false, 00:13:57.477 "reset": true, 00:13:57.477 "nvme_admin": false, 00:13:57.477 "nvme_io": false, 00:13:57.477 "nvme_io_md": false, 00:13:57.477 "write_zeroes": true, 00:13:57.477 "zcopy": false, 00:13:57.477 "get_zone_info": false, 00:13:57.477 "zone_management": false, 00:13:57.477 "zone_append": 
false, 00:13:57.477 "compare": false, 00:13:57.477 "compare_and_write": false, 00:13:57.477 "abort": false, 00:13:57.477 "seek_hole": true, 00:13:57.477 "seek_data": true, 00:13:57.477 "copy": false, 00:13:57.477 "nvme_iov_md": false 00:13:57.477 }, 00:13:57.477 "driver_specific": { 00:13:57.477 "lvol": { 00:13:57.477 "lvol_store_uuid": "d5f3adba-921f-4541-b3d0-5a327c4bcb6d", 00:13:57.477 "base_bdev": "aio_bdev", 00:13:57.477 "thin_provision": false, 00:13:57.477 "num_allocated_clusters": 38, 00:13:57.477 "snapshot": false, 00:13:57.477 "clone": false, 00:13:57.477 "esnap_clone": false 00:13:57.477 } 00:13:57.477 } 00:13:57.477 } 00:13:57.477 ] 00:13:57.477 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:13:57.738 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:57.738 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:57.738 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:57.738 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:57.738 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:57.999 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:57.999 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:13:57.999 [2024-11-05 16:38:05.018660] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.259 16:38:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:58.259 request: 00:13:58.259 { 00:13:58.259 "uuid": "d5f3adba-921f-4541-b3d0-5a327c4bcb6d", 00:13:58.259 "method": "bdev_lvol_get_lvstores", 00:13:58.259 "req_id": 1 00:13:58.259 } 00:13:58.259 Got JSON-RPC error response 00:13:58.259 response: 00:13:58.259 { 00:13:58.259 "code": -19, 00:13:58.259 "message": "No such device" 00:13:58.259 } 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.259 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.260 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:58.521 aio_bdev 00:13:58.521 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 07a45610-7bbd-4fc2-b006-68e31957fb66 00:13:58.521 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=07a45610-7bbd-4fc2-b006-68e31957fb66 00:13:58.521 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:58.521 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:13:58.521 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:58.521 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:58.521 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:58.782 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 07a45610-7bbd-4fc2-b006-68e31957fb66 -t 2000 00:13:58.782 [ 00:13:58.782 { 00:13:58.782 "name": "07a45610-7bbd-4fc2-b006-68e31957fb66", 00:13:58.782 "aliases": [ 00:13:58.782 "lvs/lvol" 00:13:58.782 ], 00:13:58.782 "product_name": "Logical Volume", 00:13:58.782 "block_size": 4096, 00:13:58.782 "num_blocks": 38912, 00:13:58.782 "uuid": "07a45610-7bbd-4fc2-b006-68e31957fb66", 00:13:58.782 "assigned_rate_limits": { 00:13:58.782 "rw_ios_per_sec": 0, 00:13:58.782 "rw_mbytes_per_sec": 0, 00:13:58.782 "r_mbytes_per_sec": 0, 00:13:58.782 "w_mbytes_per_sec": 0 00:13:58.782 }, 00:13:58.782 "claimed": false, 00:13:58.782 "zoned": false, 00:13:58.782 "supported_io_types": { 00:13:58.782 "read": true, 00:13:58.782 "write": true, 00:13:58.782 "unmap": true, 00:13:58.782 "flush": false, 00:13:58.782 "reset": true, 00:13:58.782 "nvme_admin": false, 00:13:58.782 "nvme_io": false, 00:13:58.782 "nvme_io_md": false, 00:13:58.782 "write_zeroes": true, 00:13:58.782 "zcopy": false, 00:13:58.782 "get_zone_info": false, 00:13:58.782 "zone_management": false, 00:13:58.782 "zone_append": false, 00:13:58.782 "compare": false, 00:13:58.782 "compare_and_write": false, 
00:13:58.782 "abort": false, 00:13:58.782 "seek_hole": true, 00:13:58.782 "seek_data": true, 00:13:58.782 "copy": false, 00:13:58.782 "nvme_iov_md": false 00:13:58.782 }, 00:13:58.782 "driver_specific": { 00:13:58.782 "lvol": { 00:13:58.782 "lvol_store_uuid": "d5f3adba-921f-4541-b3d0-5a327c4bcb6d", 00:13:58.782 "base_bdev": "aio_bdev", 00:13:58.782 "thin_provision": false, 00:13:58.782 "num_allocated_clusters": 38, 00:13:58.782 "snapshot": false, 00:13:58.782 "clone": false, 00:13:58.782 "esnap_clone": false 00:13:58.782 } 00:13:58.782 } 00:13:58.782 } 00:13:58.782 ] 00:13:58.782 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:13:58.782 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:58.782 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:59.043 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:59.043 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:59.043 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:59.043 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:59.304 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 07a45610-7bbd-4fc2-b006-68e31957fb66 00:13:59.304 16:38:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5f3adba-921f-4541-b3d0-5a327c4bcb6d 00:13:59.564 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:59.564 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:59.827 00:13:59.827 real 0m17.457s 00:13:59.827 user 0m45.574s 00:13:59.827 sys 0m2.830s 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:59.827 ************************************ 00:13:59.827 END TEST lvs_grow_dirty 00:13:59.827 ************************************ 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:59.827 nvmf_trace.0 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:59.827 rmmod nvme_tcp 00:13:59.827 rmmod nvme_fabrics 00:13:59.827 rmmod nvme_keyring 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 2997400 ']' 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 2997400 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2997400 ']' 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2997400 
00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2997400 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2997400' 00:13:59.827 killing process with pid 2997400 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2997400 00:13:59.827 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2997400 00:14:00.089 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:00.089 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:14:00.089 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:14:00.089 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:00.089 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:00.089 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:00.089 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:02.005 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:02.266 16:38:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:14:02.266 00:14:02.266 real 0m44.081s 00:14:02.266 user 1m7.054s 00:14:02.266 sys 0m10.103s 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:02.266 ************************************ 00:14:02.266 END TEST nvmf_lvs_grow 00:14:02.266 ************************************ 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:02.266 ************************************ 00:14:02.266 START TEST nvmf_bdev_io_wait 00:14:02.266 ************************************ 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
00:14:02.266 * Looking for test storage... 00:14:02.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:02.266 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:14:02.528 16:38:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.528 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.528 --rc genhtml_branch_coverage=1 00:14:02.528 --rc genhtml_function_coverage=1 00:14:02.529 --rc genhtml_legend=1 00:14:02.529 --rc geninfo_all_blocks=1 00:14:02.529 --rc geninfo_unexecuted_blocks=1 00:14:02.529 00:14:02.529 ' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:02.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.529 --rc genhtml_branch_coverage=1 00:14:02.529 --rc genhtml_function_coverage=1 00:14:02.529 --rc genhtml_legend=1 00:14:02.529 --rc geninfo_all_blocks=1 00:14:02.529 --rc geninfo_unexecuted_blocks=1 00:14:02.529 00:14:02.529 ' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:02.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.529 --rc genhtml_branch_coverage=1 00:14:02.529 --rc genhtml_function_coverage=1 00:14:02.529 --rc genhtml_legend=1 00:14:02.529 --rc geninfo_all_blocks=1 00:14:02.529 --rc geninfo_unexecuted_blocks=1 00:14:02.529 00:14:02.529 ' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:02.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.529 --rc genhtml_branch_coverage=1 00:14:02.529 --rc genhtml_function_coverage=1 00:14:02.529 --rc genhtml_legend=1 00:14:02.529 --rc geninfo_all_blocks=1 00:14:02.529 --rc geninfo_unexecuted_blocks=1 00:14:02.529 00:14:02.529 ' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.529 16:38:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@50 -- # : 0 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:02.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:14:02.529 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:10.840 16:38:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.840 16:38:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:10.840 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:10.840 16:38:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:10.840 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:10.840 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:10.840 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:10.840 
16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # create_target_ns 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:10.840 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:10.841 16:38:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:10.841 10.0.0.1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:10.841 10.0.0.2 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
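Stripped of the xtrace noise, the setup_interface_pair flow above reduces to a handful of iproute2/iptables commands. A dry-run sketch (DRY_RUN=echo prints each command instead of executing it, since the real ones need root and the cvl_0_0/cvl_0_1 devices; setup_pair is a hypothetical wrapper name):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the interface-pair setup traced above. With
# DRY_RUN=echo every command is printed rather than executed; unset it
# to run for real (requires root and the named devices).
DRY_RUN=echo

setup_pair() { # hypothetical helper condensing the traced steps
    local initiator=$1 target=$2 ns=$3 ip1=$4 ip2=$5
    $DRY_RUN ip link set "$target" netns "$ns"      # move target into the namespace
    $DRY_RUN ip addr add "$ip1/24" dev "$initiator" # initiator-side address
    $DRY_RUN ip netns exec "$ns" ip addr add "$ip2/24" dev "$target"
    $DRY_RUN ip link set "$initiator" up
    $DRY_RUN ip netns exec "$ns" ip link set "$target" up
    # open the NVMe/TCP port, as the ipts wrapper does in the log
    $DRY_RUN iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
}

setup_pair cvl_0_0 cvl_0_1 nvmf_ns_spdk 10.0.0.1 10.0.0.2
```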
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:10.841 
16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:10.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.724 ms 00:14:10.841 00:14:10.841 --- 10.0.0.1 ping statistics --- 00:14:10.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.841 rtt min/avg/max/mdev = 0.724/0.724/0.724/0.000 ms 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:10.841 
16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:10.841 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:10.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:14:10.841 00:14:10.841 --- 10.0.0.2 ping statistics --- 00:14:10.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.841 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # 
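The ping_ip/set_ip/get_ip_address helpers traced above share one bash idiom: an optional namespace is passed as the *name* of an array variable (NVMF_TARGET_NS_CMD), bound with a nameref (`local -n`), and splatted in front of the command via eval, so the same helper runs either directly or under `ip netns exec`. A minimal sketch of that dispatch pattern (in_ns_run is a hypothetical name, and the leading `echo` in the array makes this a dry run):

```shell
#!/usr/bin/env bash
# Sketch of the optional-namespace dispatch used by ping_ip above: when a
# variable name is supplied, bind it with a nameref and prefix the command
# with its contents (e.g. "ip netns exec nvmf_ns_spdk").
in_ns_run() { # hypothetical helper name
    local in_ns=$1
    shift
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns       # nameref to the caller's array
        eval "${ns[*]} $*"       # run inside the namespace wrapper
    else
        eval "$*"                # run directly
    fi
}

NVMF_TARGET_NS_CMD=(echo ip netns exec nvmf_ns_spdk) # echo keeps this a dry run
in_ns_run NVMF_TARGET_NS_CMD ping -c 1 10.0.0.1
in_ns_run "" echo "no namespace: runs directly"
```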
NVMF_TARGET_INTERFACE2= 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:10.842 16:38:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:10.842 16:38:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:14:10.842 ' 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:10.842 16:38:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=3002530 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 3002530 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3002530 ']' 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:10.842 16:38:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.842 [2024-11-05 16:38:17.052290] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:14:10.842 [2024-11-05 16:38:17.052352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:10.842 [2024-11-05 16:38:17.138286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:10.842 [2024-11-05 16:38:17.181118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:10.842 [2024-11-05 16:38:17.181156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:10.842 [2024-11-05 16:38:17.181164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:10.842 [2024-11-05 16:38:17.181172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:10.842 [2024-11-05 16:38:17.181177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:10.842 [2024-11-05 16:38:17.183036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:10.842 [2024-11-05 16:38:17.183207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:10.842 [2024-11-05 16:38:17.183337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:10.842 [2024-11-05 16:38:17.183337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:10.842 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:14:10.842 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0
00:14:10.842 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:14:10.842 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable
00:14:10.842 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.842 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:10.843 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:14:10.843 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.843 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:10.843 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.843 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:14:10.843 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.843 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait --
common/autotest_common.sh@10 -- # set +x 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.105 [2024-11-05 16:38:17.951807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.105 Malloc0 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.105 16:38:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.105 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.105 [2024-11-05 16:38:18.010980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3002715 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3002717 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:11.105 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
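Stripped of the xtrace wrappers, the rpc_cmd sequence above provisions the whole target in a handful of calls. A dry-run sketch written as plain rpc.py invocations (DRY_RUN=echo prints the commands, since they need a live nvmf_tgt; the scripts/rpc.py path assumes an SPDK source checkout):

```shell
#!/usr/bin/env bash
# The provisioning sequence traced above, written as plain rpc.py calls.
# DRY_RUN=echo prints the commands; unset it against a running nvmf_tgt.
DRY_RUN=echo
rpc="scripts/rpc.py" # path inside an SPDK checkout (assumption)

$DRY_RUN "$rpc" bdev_set_options -p 5 -c 1
$DRY_RUN "$rpc" framework_start_init
$DRY_RUN "$rpc" nvmf_create_transport -t tcp -o -u 8192
$DRY_RUN "$rpc" bdev_malloc_create 64 512 -b Malloc0
$DRY_RUN "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$DRY_RUN "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$DRY_RUN "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```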
-- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:11.105 { 00:14:11.105 "params": { 00:14:11.105 "name": "Nvme$subsystem", 00:14:11.105 "trtype": "$TEST_TRANSPORT", 00:14:11.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.105 "adrfam": "ipv4", 00:14:11.105 "trsvcid": "$NVMF_PORT", 00:14:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.105 "hdgst": ${hdgst:-false}, 00:14:11.105 "ddgst": ${ddgst:-false} 00:14:11.105 }, 00:14:11.106 "method": "bdev_nvme_attach_controller" 00:14:11.106 } 00:14:11.106 EOF 00:14:11.106 )") 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3002719 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3002722 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:11.106 { 00:14:11.106 "params": { 00:14:11.106 "name": "Nvme$subsystem", 00:14:11.106 "trtype": "$TEST_TRANSPORT", 00:14:11.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.106 "adrfam": "ipv4", 00:14:11.106 "trsvcid": "$NVMF_PORT", 00:14:11.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.106 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:14:11.106 "hdgst": ${hdgst:-false}, 00:14:11.106 "ddgst": ${ddgst:-false} 00:14:11.106 }, 00:14:11.106 "method": "bdev_nvme_attach_controller" 00:14:11.106 } 00:14:11.106 EOF 00:14:11.106 )") 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:11.106 { 00:14:11.106 "params": { 00:14:11.106 "name": "Nvme$subsystem", 00:14:11.106 "trtype": "$TEST_TRANSPORT", 00:14:11.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.106 "adrfam": "ipv4", 00:14:11.106 "trsvcid": "$NVMF_PORT", 00:14:11.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.106 "hdgst": ${hdgst:-false}, 00:14:11.106 "ddgst": ${ddgst:-false} 00:14:11.106 }, 00:14:11.106 "method": "bdev_nvme_attach_controller" 00:14:11.106 } 00:14:11.106 EOF 00:14:11.106 )") 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 
--json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:11.106 { 00:14:11.106 "params": { 00:14:11.106 "name": "Nvme$subsystem", 00:14:11.106 "trtype": "$TEST_TRANSPORT", 00:14:11.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.106 "adrfam": "ipv4", 00:14:11.106 "trsvcid": "$NVMF_PORT", 00:14:11.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.106 "hdgst": ${hdgst:-false}, 00:14:11.106 "ddgst": ${ddgst:-false} 00:14:11.106 }, 00:14:11.106 "method": "bdev_nvme_attach_controller" 00:14:11.106 } 00:14:11.106 EOF 00:14:11.106 )") 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3002715 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:11.106 "params": { 00:14:11.106 "name": "Nvme1", 00:14:11.106 "trtype": "tcp", 00:14:11.106 "traddr": "10.0.0.2", 00:14:11.106 "adrfam": "ipv4", 00:14:11.106 "trsvcid": "4420", 00:14:11.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.106 "hdgst": false, 00:14:11.106 "ddgst": false 00:14:11.106 }, 00:14:11.106 "method": "bdev_nvme_attach_controller" 00:14:11.106 }' 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:14:11.106 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:11.106 "params": { 00:14:11.106 "name": "Nvme1", 00:14:11.106 "trtype": "tcp", 00:14:11.106 "traddr": "10.0.0.2", 00:14:11.106 "adrfam": "ipv4", 00:14:11.106 "trsvcid": "4420", 00:14:11.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.106 "hdgst": false, 00:14:11.106 "ddgst": false 00:14:11.107 }, 00:14:11.107 "method": "bdev_nvme_attach_controller" 00:14:11.107 }' 00:14:11.107 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:14:11.107 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:11.107 "params": { 00:14:11.107 "name": "Nvme1", 00:14:11.107 "trtype": "tcp", 00:14:11.107 "traddr": "10.0.0.2", 00:14:11.107 "adrfam": "ipv4", 00:14:11.107 "trsvcid": "4420", 00:14:11.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.107 "hdgst": false, 00:14:11.107 "ddgst": false 00:14:11.107 }, 00:14:11.107 "method": 
"bdev_nvme_attach_controller" 00:14:11.107 }' 00:14:11.107 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:14:11.107 16:38:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:11.107 "params": { 00:14:11.107 "name": "Nvme1", 00:14:11.107 "trtype": "tcp", 00:14:11.107 "traddr": "10.0.0.2", 00:14:11.107 "adrfam": "ipv4", 00:14:11.107 "trsvcid": "4420", 00:14:11.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.107 "hdgst": false, 00:14:11.107 "ddgst": false 00:14:11.107 }, 00:14:11.107 "method": "bdev_nvme_attach_controller" 00:14:11.107 }' 00:14:11.107 [2024-11-05 16:38:18.066408] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:14:11.107 [2024-11-05 16:38:18.066462] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:11.107 [2024-11-05 16:38:18.068714] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:14:11.107 [2024-11-05 16:38:18.068773] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:11.107 [2024-11-05 16:38:18.071206] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:14:11.107 [2024-11-05 16:38:18.071250] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:11.107 [2024-11-05 16:38:18.074141] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:14:11.107 [2024-11-05 16:38:18.074200] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:11.369 [2024-11-05 16:38:18.221602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.369 [2024-11-05 16:38:18.251435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:11.369 [2024-11-05 16:38:18.269410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.369 [2024-11-05 16:38:18.307853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:11.369 [2024-11-05 16:38:18.312757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.369 [2024-11-05 16:38:18.341266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:14:11.369 [2024-11-05 16:38:18.355280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.369 [2024-11-05 16:38:18.383515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:11.630 Running I/O for 1 seconds... 00:14:11.630 Running I/O for 1 seconds... 00:14:11.630 Running I/O for 1 seconds... 00:14:11.630 Running I/O for 1 seconds... 
00:14:12.572 22658.00 IOPS, 88.51 MiB/s 00:14:12.572 Latency(us) 00:14:12.572 [2024-11-05T15:38:19.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.573 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:12.573 Nvme1n1 : 1.01 22713.97 88.73 0.00 0.00 5621.58 2211.84 12834.13 00:14:12.573 [2024-11-05T15:38:19.636Z] =================================================================================================================== 00:14:12.573 [2024-11-05T15:38:19.636Z] Total : 22713.97 88.73 0.00 0.00 5621.58 2211.84 12834.13 00:14:12.573 6785.00 IOPS, 26.50 MiB/s 00:14:12.573 Latency(us) 00:14:12.573 [2024-11-05T15:38:19.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.573 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:12.573 Nvme1n1 : 1.02 6824.33 26.66 0.00 0.00 18616.55 6553.60 26542.08 00:14:12.573 [2024-11-05T15:38:19.636Z] =================================================================================================================== 00:14:12.573 [2024-11-05T15:38:19.636Z] Total : 6824.33 26.66 0.00 0.00 18616.55 6553.60 26542.08 00:14:12.573 179688.00 IOPS, 701.91 MiB/s 00:14:12.573 Latency(us) 00:14:12.573 [2024-11-05T15:38:19.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.573 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:12.573 Nvme1n1 : 1.00 179321.40 700.47 0.00 0.00 709.79 312.32 2048.00 00:14:12.573 [2024-11-05T15:38:19.636Z] =================================================================================================================== 00:14:12.573 [2024-11-05T15:38:19.636Z] Total : 179321.40 700.47 0.00 0.00 709.79 312.32 2048.00 00:14:12.833 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3002717 00:14:12.833 6935.00 IOPS, 27.09 MiB/s 00:14:12.833 Latency(us) 00:14:12.833 
[2024-11-05T15:38:19.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.833 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:12.834 Nvme1n1 : 1.01 7024.94 27.44 0.00 0.00 18161.76 4696.75 43472.21 00:14:12.834 [2024-11-05T15:38:19.897Z] =================================================================================================================== 00:14:12.834 [2024-11-05T15:38:19.897Z] Total : 7024.94 27.44 0.00 0.00 18161.76 4696.75 43472.21 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3002719 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3002722 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i 
in {1..20} 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:12.834 rmmod nvme_tcp 00:14:12.834 rmmod nvme_fabrics 00:14:12.834 rmmod nvme_keyring 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 3002530 ']' 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 3002530 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3002530 ']' 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3002530 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.834 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3002530 00:14:13.095 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:13.095 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:13.095 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3002530' 00:14:13.095 killing process with pid 3002530 00:14:13.095 16:38:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3002530 00:14:13.095 16:38:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3002530 00:14:13.095 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:13.095 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:14:13.095 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:14:13.095 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:13.095 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:13.095 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:13.095 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:15.641 16:38:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:15.641 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:14:15.641 00:14:15.641 real 0m12.953s 00:14:15.641 user 0m19.058s 00:14:15.641 sys 0m6.996s 00:14:15.642 
16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:15.642 ************************************ 00:14:15.642 END TEST nvmf_bdev_io_wait 00:14:15.642 ************************************ 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:15.642 ************************************ 00:14:15.642 START TEST nvmf_queue_depth 00:14:15.642 ************************************ 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:15.642 * Looking for test storage... 
00:14:15.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:14:15.642 
16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:15.642 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:15.642 --rc genhtml_branch_coverage=1 00:14:15.642 --rc genhtml_function_coverage=1 00:14:15.642 --rc genhtml_legend=1 00:14:15.642 --rc geninfo_all_blocks=1 00:14:15.642 --rc geninfo_unexecuted_blocks=1 00:14:15.642 00:14:15.642 ' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.642 --rc genhtml_branch_coverage=1 00:14:15.642 --rc genhtml_function_coverage=1 00:14:15.642 --rc genhtml_legend=1 00:14:15.642 --rc geninfo_all_blocks=1 00:14:15.642 --rc geninfo_unexecuted_blocks=1 00:14:15.642 00:14:15.642 ' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.642 --rc genhtml_branch_coverage=1 00:14:15.642 --rc genhtml_function_coverage=1 00:14:15.642 --rc genhtml_legend=1 00:14:15.642 --rc geninfo_all_blocks=1 00:14:15.642 --rc geninfo_unexecuted_blocks=1 00:14:15.642 00:14:15.642 ' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.642 --rc genhtml_branch_coverage=1 00:14:15.642 --rc genhtml_function_coverage=1 00:14:15.642 --rc genhtml_legend=1 00:14:15.642 --rc geninfo_all_blocks=1 00:14:15.642 --rc geninfo_unexecuted_blocks=1 00:14:15.642 00:14:15.642 ' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.642 16:38:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:15.642 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@50 -- # : 0 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:15.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 
00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:14:15.643 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:14:23.784 16:38:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:14:23.784 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:23.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:23.785 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.785 16:38:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:23.785 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:23.785 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ 
tcp == tcp ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@247 -- # create_target_ns 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA 
dev_map 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:23.785 16:38:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:23.785 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:23.786 10.0.0.1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:23.786 10.0.0.2 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:23.786 
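The `val_to_ip` calls above turn the integer pool values 167772161 and 167772162 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A sketch of the conversion consistent with that output (assumed implementation; the real helper lives in `nvmf/setup.sh`):

```shell
# Convert a 32-bit integer to dotted-quad notation by extracting each
# byte with shifts and masks, matching the printf seen in the log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 167772161 = 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 167772162 = 0x0A000002 -> 10.0.0.2
```

This explains the `ip_pool=0x0a000001` seen earlier: each initiator/target pair consumes two consecutive integers from the pool, yielding the 10.0.0.1/10.0.0.2 pair assigned here.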
16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:23.786 16:38:29 
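The xtrace between `create_target_ns` and the `dev_map` assignments above boils down to a short command sequence. A condensed sketch (device names `cvl_0_0`/`cvl_0_1` and the `nvmf_ns_spdk` namespace are taken from the log; these commands require root and the actual NICs, so this is a transcript summary rather than a runnable example):

```shell
# Put the target-side NIC in its own network namespace so initiator
# and target traffic traverse the physical link, then address both ends.
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
# Open the NVMe/TCP port on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings that follow in the log verify this topology in both directions before the `nvmf_tgt` application is launched inside the namespace.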
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:23.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.617 ms 00:14:23.786 00:14:23.786 --- 10.0.0.1 ping statistics --- 00:14:23.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.786 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:23.786 16:38:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:23.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:14:23.786 00:14:23.786 --- 10.0.0.2 ping statistics --- 00:14:23.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.786 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:23.786 
16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:23.786 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:23.787 16:38:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:14:23.787 ' 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 
00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=3007444 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 3007444 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3007444 ']' 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:23.787 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.787 [2024-11-05 16:38:29.923217] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:14:23.787 [2024-11-05 16:38:29.923274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.787 [2024-11-05 16:38:30.010822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.787 [2024-11-05 16:38:30.052406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.787 [2024-11-05 16:38:30.052445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.787 [2024-11-05 16:38:30.052453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.787 [2024-11-05 16:38:30.052460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.787 [2024-11-05 16:38:30.052470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:23.787 [2024-11-05 16:38:30.053081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.787 [2024-11-05 16:38:30.791499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.787 Malloc0 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.787 16:38:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.787 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.788 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.788 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.788 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.788 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.788 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.788 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.788 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.788 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:24.049 [2024-11-05 16:38:30.852774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3007646 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3007646 /var/tmp/bdevperf.sock 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3007646 ']' 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:24.049 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:24.049 [2024-11-05 16:38:30.911291] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:14:24.049 [2024-11-05 16:38:30.911354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007646 ] 00:14:24.049 [2024-11-05 16:38:30.986223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.049 [2024-11-05 16:38:31.028383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.989 16:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:24.989 16:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:14:24.989 16:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:24.989 16:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.989 16:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:24.989 NVMe0n1 00:14:24.989 16:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.989 16:38:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.989 Running I/O for 10 seconds... 
00:14:27.316 9584.00 IOPS, 37.44 MiB/s [2024-11-05T15:38:35.321Z] 10741.50 IOPS, 41.96 MiB/s [2024-11-05T15:38:36.262Z] 11162.67 IOPS, 43.60 MiB/s [2024-11-05T15:38:37.203Z] 11257.25 IOPS, 43.97 MiB/s [2024-11-05T15:38:38.144Z] 11326.40 IOPS, 44.24 MiB/s [2024-11-05T15:38:39.085Z] 11430.17 IOPS, 44.65 MiB/s [2024-11-05T15:38:40.026Z] 11478.43 IOPS, 44.84 MiB/s [2024-11-05T15:38:41.410Z] 11513.12 IOPS, 44.97 MiB/s [2024-11-05T15:38:42.352Z] 11542.22 IOPS, 45.09 MiB/s [2024-11-05T15:38:42.352Z] 11567.40 IOPS, 45.19 MiB/s 00:14:35.289 Latency(us) 00:14:35.289 [2024-11-05T15:38:42.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.289 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:35.289 Verification LBA range: start 0x0 length 0x4000 00:14:35.289 NVMe0n1 : 10.05 11606.01 45.34 0.00 0.00 87923.84 17913.17 71215.79 00:14:35.289 [2024-11-05T15:38:42.352Z] =================================================================================================================== 00:14:35.289 [2024-11-05T15:38:42.352Z] Total : 11606.01 45.34 0.00 0.00 87923.84 17913.17 71215.79 00:14:35.289 { 00:14:35.289 "results": [ 00:14:35.289 { 00:14:35.289 "job": "NVMe0n1", 00:14:35.289 "core_mask": "0x1", 00:14:35.289 "workload": "verify", 00:14:35.289 "status": "finished", 00:14:35.289 "verify_range": { 00:14:35.289 "start": 0, 00:14:35.289 "length": 16384 00:14:35.289 }, 00:14:35.289 "queue_depth": 1024, 00:14:35.289 "io_size": 4096, 00:14:35.289 "runtime": 10.054102, 00:14:35.289 "iops": 11606.009169192834, 00:14:35.289 "mibps": 45.33597331715951, 00:14:35.289 "io_failed": 0, 00:14:35.289 "io_timeout": 0, 00:14:35.289 "avg_latency_us": 87923.83991224461, 00:14:35.289 "min_latency_us": 17913.173333333332, 00:14:35.289 "max_latency_us": 71215.78666666667 00:14:35.289 } 00:14:35.289 ], 00:14:35.289 "core_count": 1 00:14:35.289 } 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 3007646 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3007646 ']' 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3007646 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3007646 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3007646' 00:14:35.289 killing process with pid 3007646 00:14:35.289 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3007646 00:14:35.289 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.289 00:14:35.289 Latency(us) 00:14:35.289 [2024-11-05T15:38:42.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.290 [2024-11-05T15:38:42.353Z] =================================================================================================================== 00:14:35.290 [2024-11-05T15:38:42.353Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3007646 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:35.290 rmmod nvme_tcp 00:14:35.290 rmmod nvme_fabrics 00:14:35.290 rmmod nvme_keyring 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 3007444 ']' 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 3007444 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3007444 ']' 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3007444 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:35.290 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3007444 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3007444' 00:14:35.550 killing process with pid 3007444 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3007444 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3007444 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:35.550 16:38:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:38.092 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:38.092 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:38.093 16:38:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:14:38.093 
16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:14:38.093 00:14:38.093 real 0m22.385s 00:14:38.093 user 0m25.963s 00:14:38.093 sys 0m6.717s 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:38.093 ************************************ 00:14:38.093 END TEST nvmf_queue_depth 00:14:38.093 ************************************ 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:38.093 ************************************ 00:14:38.093 START TEST nvmf_target_multipath 00:14:38.093 ************************************ 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:38.093 * Looking for test storage... 
00:14:38.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:14:38.093 16:38:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:38.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.093 --rc genhtml_branch_coverage=1 00:14:38.093 --rc genhtml_function_coverage=1 00:14:38.093 --rc genhtml_legend=1 00:14:38.093 --rc geninfo_all_blocks=1 00:14:38.093 --rc geninfo_unexecuted_blocks=1 00:14:38.093 00:14:38.093 ' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:38.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.093 --rc genhtml_branch_coverage=1 00:14:38.093 --rc genhtml_function_coverage=1 00:14:38.093 --rc genhtml_legend=1 00:14:38.093 --rc geninfo_all_blocks=1 00:14:38.093 --rc geninfo_unexecuted_blocks=1 00:14:38.093 00:14:38.093 ' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:38.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.093 --rc genhtml_branch_coverage=1 00:14:38.093 --rc genhtml_function_coverage=1 00:14:38.093 --rc genhtml_legend=1 00:14:38.093 --rc geninfo_all_blocks=1 00:14:38.093 --rc geninfo_unexecuted_blocks=1 00:14:38.093 00:14:38.093 ' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:38.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.093 --rc genhtml_branch_coverage=1 00:14:38.093 --rc genhtml_function_coverage=1 00:14:38.093 --rc genhtml_legend=1 00:14:38.093 --rc geninfo_all_blocks=1 00:14:38.093 --rc geninfo_unexecuted_blocks=1 00:14:38.093 00:14:38.093 ' 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:38.093 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.094 
16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:38.094 16:38:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:38.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # xtrace_disable 00:14:38.094 16:38:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@131 -- # pci_devs=() 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@132 -- # local -a pci_net_devs 
00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@135 -- # net_devs=() 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@136 -- # e810=() 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@136 -- # local -ga e810 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@137 -- # x722=() 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@137 -- # local -ga x722 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@138 -- # mlx=() 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@138 -- # local -ga mlx 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.245 16:38:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:46.245 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:46.245 16:38:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:46.245 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:46.245 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:46.246 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:46.246 Found net 
devices under 0000:4b:00.1: cvl_0_1 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # is_hw=yes 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@247 -- # create_target_ns 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:46.246 16:38:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:46.246 16:38:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:46.246 
16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:46.246 10.0.0.1 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:46.246 16:38:52 
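The trace above shows `val_to_ip` in setup.sh turning the packed integer 167772161 into `10.0.0.1` before `ip addr add` runs. A minimal sketch of that conversion (the values and the `printf '%u.%u.%u.%u\n'` call are taken from the trace; the bit-shift decomposition of the 32-bit value into four octets is an assumption about how the script derives the printf arguments):

```shell
# Sketch of the val_to_ip step seen in the trace: decode a packed 32-bit
# value into dotted-quad form. 167772161 == 0x0A000001 == 10.0.0.1.
val_to_ip() {
  local val=$1
  # Assumed decomposition: peel off one octet per 8-bit shift.
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # initiator side
val_to_ip 167772162   # target side
```

This also explains the `ips=("$ip" $((++ip)))` line earlier in the trace: each initiator/target pair simply consumes two consecutive values from the `0x0a000001` pool.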
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:46.246 10.0.0.2 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set 
cvl_0_1 up 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:46.246 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:46.247 16:38:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:46.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.553 ms 00:14:46.247 00:14:46.247 --- 10.0.0.1 ping statistics --- 00:14:46.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.247 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:46.247 
16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:46.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:46.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:14:46.247 00:14:46.247 --- 10.0.0.2 ping statistics --- 00:14:46.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.247 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # return 0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath 
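The `set_ip`/`get_ip_address` pattern running through this section stores each assigned address in the interface's `ifalias` sysfs node (`echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias`) and later recovers it with `cat` instead of parsing `ip addr` output. A sketch of that round-trip, using a temp file as a stand-in for the sysfs node since writing the real one requires root and a live NIC (the stand-in is the only assumption; the tee/cat shape mirrors the trace):

```shell
# Sketch of the ifalias round-trip seen in the trace. setup.sh writes the
# assigned address into /sys/class/net/<dev>/ifalias at set_ip time, and
# get_ip_address later reads it back with cat. A temp file stands in for
# the sysfs node so this runs without root or real hardware.
ifalias=$(mktemp)                           # stand-in for /sys/class/net/cvl_0_0/ifalias
echo 10.0.0.1 | tee "$ifalias" >/dev/null   # set_ip: record the address
ip=$(cat "$ifalias")                        # get_ip_address: read it back
echo "$ip"
rm -f "$ifalias"
```

Reading the alias back is what lets the later `NVMF_FIRST_INITIATOR_IP=10.0.0.1` and `NVMF_FIRST_TARGET_IP=10.0.0.2` assignments in this trace work without re-deriving the addresses from the IP pool.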
-- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:46.247 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:46.248 16:38:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:46.248 
16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:14:46.248 ' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:46.248 only one NIC for nvmf test 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:46.248 16:38:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:46.248 rmmod nvme_tcp 00:14:46.248 rmmod nvme_fabrics 00:14:46.248 rmmod nvme_keyring 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:46.248 16:38:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@258 -- # delete_main_bridge 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:47.633 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:47.634 16:38:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:14:47.634 16:38:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:14:47.634 00:14:47.634 real 0m9.851s 00:14:47.634 user 0m2.150s 00:14:47.634 sys 0m5.644s 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:47.634 ************************************ 00:14:47.634 END TEST nvmf_target_multipath 00:14:47.634 ************************************ 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:47.634 ************************************ 00:14:47.634 START TEST nvmf_zcopy 00:14:47.634 ************************************ 00:14:47.634 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:47.897 * Looking for test storage... 
00:14:47.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.897 
16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.897 --rc genhtml_branch_coverage=1 00:14:47.897 --rc genhtml_function_coverage=1 00:14:47.897 --rc genhtml_legend=1 00:14:47.897 --rc geninfo_all_blocks=1 00:14:47.897 --rc 
geninfo_unexecuted_blocks=1 00:14:47.897 00:14:47.897 ' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.897 --rc genhtml_branch_coverage=1 00:14:47.897 --rc genhtml_function_coverage=1 00:14:47.897 --rc genhtml_legend=1 00:14:47.897 --rc geninfo_all_blocks=1 00:14:47.897 --rc geninfo_unexecuted_blocks=1 00:14:47.897 00:14:47.897 ' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.897 --rc genhtml_branch_coverage=1 00:14:47.897 --rc genhtml_function_coverage=1 00:14:47.897 --rc genhtml_legend=1 00:14:47.897 --rc geninfo_all_blocks=1 00:14:47.897 --rc geninfo_unexecuted_blocks=1 00:14:47.897 00:14:47.897 ' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.897 --rc genhtml_branch_coverage=1 00:14:47.897 --rc genhtml_function_coverage=1 00:14:47.897 --rc genhtml_legend=1 00:14:47.897 --rc geninfo_all_blocks=1 00:14:47.897 --rc geninfo_unexecuted_blocks=1 00:14:47.897 00:14:47.897 ' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
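The lcov version check traced above (scripts/common.sh, `lt 1.15 2` via `cmp_versions`) splits both version strings on dots and compares them component-wise as integers, padding missing components with zero. A standalone sketch of that comparison, assuming bash; `version_lt` is an illustrative name, not the actual common.sh function:

```shell
# Return 0 (true) if dotted version $1 sorts strictly before $2.
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # Missing components count as 0, so 1.15 compares as 1.15.0 vs 2.0.0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}
```

Component-wise integer comparison is what makes `1.2.3 < 1.10.0` come out true, where a plain string comparison would get it backwards.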
00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
paths/export.sh@5 -- # export PATH 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:47.897 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:47.897 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:14:47.898 16:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.039 
16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:56.039 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 
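The device scan traced above buckets each discovered PCI vendor:device ID into a NIC family (e810, x722, mlx) before deciding how the test should use the port. A minimal sketch of that classification using only IDs visible in this log (intel=0x8086, mellanox=0x15b3); `nic_family` is an illustrative name and the table is deliberately a subset:

```shell
# Map a "vendor:device" PCI ID to the NIC family the nvmf setup uses.
# Only a subset of the IDs registered in the traced pci_bus_cache
# tables is listed here; anything else falls through to "unknown".
nic_family() {
    local id=$1
    case $id in
        0x8086:0x1592|0x8086:0x159b)             echo e810 ;;    # Intel E810 (ice)
        0x8086:0x37d2)                           echo x722 ;;    # Intel X722
        0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x101d) echo mlx ;;   # Mellanox ConnectX (subset)
        *)                                       echo unknown ;;
    esac
}
```

In this run both ports report `0x8086 - 0x159b`, which is why the log classifies them as e810 and picks the `ice`-driven `cvl_0_0`/`cvl_0_1` interfaces.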
00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:56.039 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.039 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:56.039 16:39:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:56.040 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:56.040 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:56.040 16:39:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@247 -- # create_target_ns 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:56.040 10.0.0.1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 
NVMF_TARGET_NS_CMD 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:56.040 16:39:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:56.040 10.0.0.2 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:56.040 16:39:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:56.040 16:39:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:56.040 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:56.041 
16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:56.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.592 ms 00:14:56.041 00:14:56.041 --- 10.0.0.1 ping statistics --- 00:14:56.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.041 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 
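The set_ip steps traced above turn 32-bit values from the 0x0a000001 pool into dotted-quad addresses (167772161 → 10.0.0.1, 167772162 → 10.0.0.2) before handing them to `ip addr add`. A sketch of that conversion, modeled on the val_to_ip helper in nvmf/setup.sh; the bit-shift arithmetic is an assumption inferred from the `printf '%u.%u.%u.%u'` calls in the log:

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip (nvmf/setup.sh@11-13): split a 32-bit pool value
# into four octets. Shift arithmetic inferred from the traced printf.
val_to_ip() {
	local val=$1
	printf '%u.%u.%u.%u\n' \
		$(( (val >> 24) & 0xff )) \
		$(( (val >> 16) & 0xff )) \
		$(( (val >> 8)  & 0xff )) \
		$((  val        & 0xff ))
}

# setup_interface_pair assigns two consecutive addresses per pair
# (ips=("$ip" $((++ip))) at setup.sh@48), then advances the pool by 2.
ip_pool=$((0x0a000001))
val_to_ip "$ip_pool"          # initiator0 -> 10.0.0.1
val_to_ip "$((ip_pool + 1))"  # target0    -> 10.0.0.2
```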
00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:56.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:56.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:14:56.041 00:14:56.041 --- 10.0.0.2 ping statistics --- 00:14:56.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.041 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:56.041 
16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:14:56.041 
16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:56.041 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:14:56.042 ' 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=3018635 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 3018635 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3018635 ']' 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:56.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:56.042 16:39:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 [2024-11-05 16:39:02.344119] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:14:56.042 [2024-11-05 16:39:02.344172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.042 [2024-11-05 16:39:02.441977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.042 [2024-11-05 16:39:02.491951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.042 [2024-11-05 16:39:02.492012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.042 [2024-11-05 16:39:02.492027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.042 [2024-11-05 16:39:02.492035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.042 [2024-11-05 16:39:02.492041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
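The `waitforlisten 3018635` call above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock, retrying up to `max_retries=100` times. A simplified sketch of that polling loop; the real helper in common/autotest_common.sh also checks the PID and issues an RPC probe, which this illustration omits:

```shell
#!/usr/bin/env bash
# Hedged sketch of waitforlisten (common/autotest_common.sh): poll until
# the app's RPC socket path appears, up to max_retries attempts. The real
# function also verifies the PID is alive and the socket accepts RPCs.
waitforlisten_sketch() {
	local rpc_addr=$1 max_retries=${2:-100}
	local i
	for ((i = 0; i < max_retries; i++)); do
		[[ -e $rpc_addr ]] && return 0
		sleep 0.1
	done
	return 1
}
```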
00:14:56.042 [2024-11-05 16:39:02.492825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.303 [2024-11-05 16:39:03.227184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:56.303 [2024-11-05 16:39:03.251442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:56.303 malloc0
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:56.303 16:39:03
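The xtrace output above captures the whole target setup: a zero-copy TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, listeners for the subsystem and discovery service on 10.0.0.2:4420, a 32 MiB malloc bdev, and namespace 1. As a minimal sketch (not part of the test suite), the same sequence could be replayed from Python by shelling out to SPDK's rpc.py; the `SPDK_RPC` path and the `as_argv` helper are assumptions, while the command strings are taken verbatim from the log:

```python
import shlex

# Path to SPDK's JSON-RPC client; adjust to your checkout (assumption).
SPDK_RPC = "scripts/rpc.py"

# RPC sequence reproduced from the rpc_cmd xtrace lines in the log above.
SETUP_CMDS = [
    "nvmf_create_transport -t tcp -o -c 0 --zcopy",
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10",
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420",
    "nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420",
    "bdev_malloc_create 32 4096 -b malloc0",
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1",
]

def as_argv(cmd: str) -> list[str]:
    """Turn one RPC command line into an argv suitable for subprocess.run."""
    return [SPDK_RPC, *shlex.split(cmd)]

for cmd in SETUP_CMDS:
    print(as_argv(cmd))
```

Each argv could then be passed to `subprocess.run` against a running nvmf target; the order matters, since the namespace add fails if the subsystem or bdev does not exist yet.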
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:14:56.303 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:14:56.303 {
00:14:56.303 "params": {
00:14:56.303 "name": "Nvme$subsystem",
00:14:56.303 "trtype": "$TEST_TRANSPORT",
00:14:56.303 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:56.303 "adrfam": "ipv4",
00:14:56.303 "trsvcid": "$NVMF_PORT",
00:14:56.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:56.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:56.303 "hdgst": ${hdgst:-false},
00:14:56.303 "ddgst": ${ddgst:-false}
00:14:56.304 },
00:14:56.304 "method": "bdev_nvme_attach_controller"
00:14:56.304 }
00:14:56.304 EOF
00:14:56.304 )")
00:14:56.304 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat
00:14:56.304 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq .
00:14:56.304 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=,
00:14:56.304 16:39:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:14:56.304 "params": {
00:14:56.304 "name": "Nvme1",
00:14:56.304 "trtype": "tcp",
00:14:56.304 "traddr": "10.0.0.2",
00:14:56.304 "adrfam": "ipv4",
00:14:56.304 "trsvcid": "4420",
00:14:56.304 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:56.304 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:56.304 "hdgst": false,
00:14:56.304 "ddgst": false
00:14:56.304 },
00:14:56.304 "method": "bdev_nvme_attach_controller"
00:14:56.304 }'
00:14:56.304 [2024-11-05 16:39:03.350413] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization...
00:14:56.304 [2024-11-05 16:39:03.350480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018770 ]
00:14:56.564 [2024-11-05 16:39:03.425559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:56.564 [2024-11-05 16:39:03.468448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:56.824 Running I/O for 10 seconds...
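The JSON that gen_nvmf_target_json expands to (printed just above by `jq`/`printf`) is the attach config bdevperf reads from /dev/fd/62. A minimal Python sketch that produces an equivalent per-subsystem fragment; the function name and defaults are mine, the field names and values come straight from the log (the real shell helper may join several such fragments with `IFS=,`):

```python
import json

def gen_target_json(subsystem: int = 1,
                    transport: str = "tcp",
                    traddr: str = "10.0.0.2",
                    trsvcid: str = "4420",
                    hdgst: bool = False,
                    ddgst: bool = False) -> str:
    """Build one bdev_nvme_attach_controller entry like the log's JSON."""
    cfg = {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": transport,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }
    return json.dumps(cfg, indent=2)

print(gen_target_json())
```

Feeding the generated config to bdevperf over a pipe (as the test does with `--json /dev/fd/62`) avoids writing a temporary file.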
00:14:58.708 6637.00 IOPS, 51.85 MiB/s
[2024-11-05T15:39:06.712Z] 7342.50 IOPS, 57.36 MiB/s
[2024-11-05T15:39:08.096Z] 8123.00 IOPS, 63.46 MiB/s
[2024-11-05T15:39:09.038Z] 8511.75 IOPS, 66.50 MiB/s
[2024-11-05T15:39:09.976Z] 8746.60 IOPS, 68.33 MiB/s
[2024-11-05T15:39:10.918Z] 8901.17 IOPS, 69.54 MiB/s
[2024-11-05T15:39:11.861Z] 9010.71 IOPS, 70.40 MiB/s
[2024-11-05T15:39:12.801Z] 9093.50 IOPS, 71.04 MiB/s
[2024-11-05T15:39:13.741Z] 9159.44 IOPS, 71.56 MiB/s
[2024-11-05T15:39:13.741Z] 9212.30 IOPS, 71.97 MiB/s
00:15:06.678 Latency(us)
00:15:06.678 [2024-11-05T15:39:13.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:06.678 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:06.678 Verification LBA range: start 0x0 length 0x1000
00:15:06.678 Nvme1n1 : 10.01 9212.66 71.97 0.00 0.00 13841.83 1815.89 27634.35
00:15:06.678 [2024-11-05T15:39:13.741Z] ===================================================================================================================
00:15:06.678 [2024-11-05T15:39:13.741Z] Total : 9212.66 71.97 0.00 0.00 13841.83 1815.89 27634.35
00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3021458
00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:15:06.938 16:39:13
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:15:06.938 { 00:15:06.938 "params": { 00:15:06.938 "name": "Nvme$subsystem", 00:15:06.938 "trtype": "$TEST_TRANSPORT", 00:15:06.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:06.938 "adrfam": "ipv4", 00:15:06.938 "trsvcid": "$NVMF_PORT", 00:15:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:06.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:06.938 "hdgst": ${hdgst:-false}, 00:15:06.938 "ddgst": ${ddgst:-false} 00:15:06.938 }, 00:15:06.938 "method": "bdev_nvme_attach_controller" 00:15:06.938 } 00:15:06.938 EOF 00:15:06.938 )") 00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:15:06.938 [2024-11-05 16:39:13.857305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.857336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:15:06.938 16:39:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:15:06.938 "params": { 00:15:06.938 "name": "Nvme1", 00:15:06.938 "trtype": "tcp", 00:15:06.938 "traddr": "10.0.0.2", 00:15:06.938 "adrfam": "ipv4", 00:15:06.938 "trsvcid": "4420", 00:15:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:06.938 "hdgst": false, 00:15:06.938 "ddgst": false 00:15:06.938 }, 00:15:06.938 "method": "bdev_nvme_attach_controller" 00:15:06.938 }' 00:15:06.938 [2024-11-05 16:39:13.869301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.869310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.881331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.881338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.893360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.893368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.905391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.905399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.910342] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
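The throughput figures bdevperf reported earlier are internally consistent: MiB/s is just IOPS multiplied by the 8192-byte I/O size (`-o 8192` on the command line). A quick arithmetic check against the numbers in the log; the helper name is mine:

```python
IO_SIZE = 8192          # -o 8192 from the bdevperf invocation in the log
MIB = 1024 * 1024

def iops_to_mibs(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / MIB

# Final verify-run average from the log: 9212.66 IOPS reported as 71.97 MiB/s
print(round(iops_to_mibs(9212.66), 2))  # → 71.97
# First one-second sample: 6637.00 IOPS reported as 51.85 MiB/s
print(round(iops_to_mibs(6637.00), 2))  # → 51.85
```

The same relation holds for the 8 KiB randrw run that follows, so any mismatch between the IOPS and MiB/s columns would indicate a reporting bug rather than measurement noise.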
00:15:06.938 [2024-11-05 16:39:13.910390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021458 ] 00:15:06.938 [2024-11-05 16:39:13.917422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.917430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.929453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.929460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.941485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.941495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.953515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.953522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.965546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.965554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.977578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:13.977586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:13.979929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.938 [2024-11-05 16:39:13.989609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:06.938 [2024-11-05 16:39:13.989623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.938 [2024-11-05 16:39:14.001639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.938 [2024-11-05 16:39:14.001647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.198 [2024-11-05 16:39:14.013672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.198 [2024-11-05 16:39:14.013681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.198 [2024-11-05 16:39:14.015550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.198 [2024-11-05 16:39:14.025705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.198 [2024-11-05 16:39:14.025713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.198 [2024-11-05 16:39:14.037741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.198 [2024-11-05 16:39:14.037757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.198 [2024-11-05 16:39:14.049783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.198 [2024-11-05 16:39:14.049796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.061799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.061808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.073828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.073835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.085859] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.085866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.097906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.097924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.109928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.109939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.121958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.121967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.133987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.133994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.146018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.146025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.158049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.158058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.170081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.170090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.182115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.182123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.230264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.230279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 [2024-11-05 16:39:14.242273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.242282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.199 Running I/O for 5 seconds... 00:15:07.199 [2024-11-05 16:39:14.256539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.199 [2024-11-05 16:39:14.256555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.269872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.269890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.283328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.283344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.297074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.297090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.310281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.310296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.323850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.323866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.337050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.337065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.349971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.349986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.362685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.362700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.375053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.375068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.387663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.387678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.401026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.401041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.414674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.414688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.427884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 
[2024-11-05 16:39:14.427899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.441155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.441169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.454420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.454435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.467298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.467312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.480415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.480430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.494096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.494114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.507391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.507406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.459 [2024-11-05 16:39:14.520713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.459 [2024-11-05 16:39:14.520728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.534246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.534261] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.547549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.547564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.560630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.560644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.574114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.574129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.587617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.587631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.600127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.600141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.612468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.612483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.626020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.626035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.639284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.639299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:07.720 [2024-11-05 16:39:14.652812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.652826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.666307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.666322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.679984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.679999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.720 [2024-11-05 16:39:14.692491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.720 [2024-11-05 16:39:14.692506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.721 [2024-11-05 16:39:14.706032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.721 [2024-11-05 16:39:14.706047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.721 [2024-11-05 16:39:14.719179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.721 [2024-11-05 16:39:14.719193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.721 [2024-11-05 16:39:14.732092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.721 [2024-11-05 16:39:14.732107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.721 [2024-11-05 16:39:14.745441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.721 [2024-11-05 16:39:14.745459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.721 [2024-11-05 16:39:14.758505] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.721 [2024-11-05 16:39:14.758520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.721 [2024-11-05 16:39:14.771388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.721 [2024-11-05 16:39:14.771402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.721 [2024-11-05 16:39:14.784262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.721 [2024-11-05 16:39:14.784277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.797642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.797657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.810543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.810557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.824218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.824233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.836881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.836896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.849490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.849504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.863175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.863190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.875983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.875997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.889637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.889652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.902465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.902480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.916284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.916299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.928979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.928994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.942356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.942371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.955868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.955883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.969207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 
[2024-11-05 16:39:14.969222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.982244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.982259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:14.994687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:14.994709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:15.008373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:15.008388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:15.021194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:15.021210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.981 [2024-11-05 16:39:15.034736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.981 [2024-11-05 16:39:15.034755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:08.241 [2024-11-05 16:39:15.047557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:08.241 [2024-11-05 16:39:15.047572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:08.241 [2024-11-05 16:39:15.061323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:08.241 [2024-11-05 16:39:15.061338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:08.241 [2024-11-05 16:39:15.074573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:08.241 [2024-11-05 16:39:15.074589] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:08.241 [2024-11-05 16:39:15.087179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:08.241 [2024-11-05 16:39:15.087194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps ...]
00:15:08.241 19085.00 IOPS, 149.10 MiB/s [2024-11-05T15:39:15.304Z]
[... the same two-line error pair continues to repeat ...]
00:15:09.435 19217.50 IOPS, 150.14 MiB/s [2024-11-05T15:39:16.498Z]
[... the same two-line error pair continues to repeat ...]
00:15:10.216 [2024-11-05 16:39:17.114054] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.114069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.126635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.126650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.140224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.140239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.153965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.153981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.166981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.166995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.180537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.180553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.193444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.193460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.206376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.206391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.219120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.219135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.231872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.231887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.244794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.244809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 19238.67 IOPS, 150.30 MiB/s [2024-11-05T15:39:17.279Z] [2024-11-05 16:39:17.257282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.216 [2024-11-05 16:39:17.257296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.216 [2024-11-05 16:39:17.270281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.217 [2024-11-05 16:39:17.270295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.283766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.283781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.297159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.297173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.310617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.310631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.324099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.324114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.336207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.336221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.349304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.349319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.362654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.362669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.375696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.375711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.389253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.389268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.402290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.477 [2024-11-05 16:39:17.402305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.477 [2024-11-05 16:39:17.415820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.415835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.428717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 
[2024-11-05 16:39:17.428731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.441733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.441752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.455013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.455031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.468508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.468523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.481964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.481979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.495197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.495211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.508117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.508132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.521285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.521299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.478 [2024-11-05 16:39:17.535016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.478 [2024-11-05 16:39:17.535031] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.738 [2024-11-05 16:39:17.547482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.738 [2024-11-05 16:39:17.547497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.738 [2024-11-05 16:39:17.560898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.560912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.574414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.574428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.587574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.587588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.600995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.601010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.614392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.614407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.627352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.627367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.639673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.639688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:10.739 [2024-11-05 16:39:17.653473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.653488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.666573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.666588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.679734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.679752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.693206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.693221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.705740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.705762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.718188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.718202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.731550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.731564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.744921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.744936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.757883] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.757898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.771072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.771086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.784025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.784040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:10.739 [2024-11-05 16:39:17.796993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:10.739 [2024-11-05 16:39:17.797007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.810206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.810221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.822653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.822668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.835418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.835433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.849116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.849131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.861632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.861647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.875130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.875144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.888044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.888058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.901514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.901529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.915083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.915097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.928584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.928598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.941985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.942000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.954943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.954961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.968518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 
[2024-11-05 16:39:17.968533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.981932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.981946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.001 [2024-11-05 16:39:17.995162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.001 [2024-11-05 16:39:17.995177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.002 [2024-11-05 16:39:18.008567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.002 [2024-11-05 16:39:18.008582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.002 [2024-11-05 16:39:18.021851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.002 [2024-11-05 16:39:18.021866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.002 [2024-11-05 16:39:18.035265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.002 [2024-11-05 16:39:18.035280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.002 [2024-11-05 16:39:18.048318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.002 [2024-11-05 16:39:18.048332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.002 [2024-11-05 16:39:18.061430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.002 [2024-11-05 16:39:18.061445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.074698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.074713] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.087867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.087882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.100518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.100533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.112955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.112970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.126179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.126193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.138587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.138600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.151915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.151930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.165400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.165414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.178628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.178643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:11.262 [2024-11-05 16:39:18.192081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.192096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.205485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.205504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.218779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.218794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.232280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.232295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.245132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.245147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 19259.75 IOPS, 150.47 MiB/s [2024-11-05T15:39:18.325Z] [2024-11-05 16:39:18.258856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.258870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.271538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.271553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.283984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.283999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:11.262 [2024-11-05 16:39:18.296657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.296672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.308892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.308906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.262 [2024-11-05 16:39:18.321233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.262 [2024-11-05 16:39:18.321248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.334625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.334640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.347968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.347984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.361278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.361293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.374920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.374934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.388625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.388640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.400883] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.400899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.414262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.414276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.427838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.427853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.440963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.440978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.454323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.454338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.467261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.467277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.480698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.480713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.494046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.494061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.506506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.506521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.519642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.519656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.532318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.532333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.545465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.545481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.558167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.558182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.570809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.570824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.523 [2024-11-05 16:39:18.584084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.523 [2024-11-05 16:39:18.584098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.783 [2024-11-05 16:39:18.596984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.783 [2024-11-05 16:39:18.596998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.783 [2024-11-05 16:39:18.610428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.783 
[2024-11-05 16:39:18.610443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.783 [2024-11-05 16:39:18.623851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.783 [2024-11-05 16:39:18.623866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.783 [2024-11-05 16:39:18.637489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.783 [2024-11-05 16:39:18.637504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.783 [2024-11-05 16:39:18.650181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.783 [2024-11-05 16:39:18.650196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.784 [2024-11-05 16:39:18.663332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.784 [2024-11-05 16:39:18.663347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.784 [2024-11-05 16:39:18.676548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.784 [2024-11-05 16:39:18.676563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.784 [2024-11-05 16:39:18.688777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.784 [2024-11-05 16:39:18.688792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.784 [2024-11-05 16:39:18.702228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.784 [2024-11-05 16:39:18.702243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.784 [2024-11-05 16:39:18.715477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.784 [2024-11-05 16:39:18.715492] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:11.784 [2024-11-05 16:39:18.728899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:11.784 [2024-11-05 16:39:18.728914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message error pair ("Requested NSID 1 already in use" / "Unable to add namespace") repeats roughly 40 more times at ~13 ms intervals, from 2024-11-05 16:39:18.742 through 16:39:19.239 ...]
00:15:12.305 [2024-11-05 16:39:19.251959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.251974]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 19260.00 IOPS, 150.47 MiB/s 00:15:12.305 Latency(us) 00:15:12.305 [2024-11-05T15:39:19.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.305 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:12.305 Nvme1n1 : 5.00 19270.72 150.55 0.00 0.00 6636.89 2662.40 17257.81 00:15:12.305 [2024-11-05T15:39:19.368Z] =================================================================================================================== 00:15:12.305 [2024-11-05T15:39:19.368Z] Total : 19270.72 150.55 0.00 0.00 6636.89 2662.40 17257.81 00:15:12.305 [2024-11-05 16:39:19.261874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.261887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 [2024-11-05 16:39:19.273900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.273912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 [2024-11-05 16:39:19.285937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.285949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 [2024-11-05 16:39:19.297961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.297972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 [2024-11-05 16:39:19.309992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.310003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 [2024-11-05 16:39:19.322020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.322029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 [2024-11-05 16:39:19.334049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.334057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 [2024-11-05 16:39:19.346081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.346088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.305 [2024-11-05 16:39:19.358115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.305 [2024-11-05 16:39:19.358125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.566 [2024-11-05 16:39:19.370144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:12.566 [2024-11-05 16:39:19.370153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3021458) - No such process 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3021458 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 
00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:12.566 delay0 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.566 16:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:12.566 [2024-11-05 16:39:19.518073] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:20.699 Initializing NVMe Controllers 00:15:20.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:20.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:20.699 Initialization complete. Launching workers. 
00:15:20.699 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 27822 00:15:20.699 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27945, failed to submit 116 00:15:20.699 success 27876, unsuccessful 69, failed 0 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:20.699 rmmod nvme_tcp 00:15:20.699 rmmod nvme_fabrics 00:15:20.699 rmmod nvme_keyring 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 3018635 ']' 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 3018635 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3018635 ']' 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3018635 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@957 -- # uname 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3018635 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3018635' 00:15:20.699 killing process with pid 3018635 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3018635 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3018635 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:20.699 16:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@121 -- # return 0 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:15:22.085 
16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:15:22.085 00:15:22.085 real 0m34.335s 00:15:22.085 user 0m45.511s 00:15:22.085 sys 0m11.505s 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:22.085 ************************************ 00:15:22.085 END TEST nvmf_zcopy 00:15:22.085 ************************************ 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:22.085 16:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:22.085 ************************************ 00:15:22.085 START TEST nvmf_nmic 00:15:22.085 ************************************ 00:15:22.085 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:22.085 * Looking for test storage... 
00:15:22.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.085 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:22.085 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:22.085 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:22.346 16:39:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.346 --rc genhtml_branch_coverage=1 00:15:22.346 --rc genhtml_function_coverage=1 00:15:22.346 --rc genhtml_legend=1 00:15:22.346 --rc geninfo_all_blocks=1 00:15:22.346 --rc geninfo_unexecuted_blocks=1 
00:15:22.346 00:15:22.346 ' 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.346 --rc genhtml_branch_coverage=1 00:15:22.346 --rc genhtml_function_coverage=1 00:15:22.346 --rc genhtml_legend=1 00:15:22.346 --rc geninfo_all_blocks=1 00:15:22.346 --rc geninfo_unexecuted_blocks=1 00:15:22.346 00:15:22.346 ' 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.346 --rc genhtml_branch_coverage=1 00:15:22.346 --rc genhtml_function_coverage=1 00:15:22.346 --rc genhtml_legend=1 00:15:22.346 --rc geninfo_all_blocks=1 00:15:22.346 --rc geninfo_unexecuted_blocks=1 00:15:22.346 00:15:22.346 ' 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.346 --rc genhtml_branch_coverage=1 00:15:22.346 --rc genhtml_function_coverage=1 00:15:22.346 --rc genhtml_legend=1 00:15:22.346 --rc geninfo_all_blocks=1 00:15:22.346 --rc geninfo_unexecuted_blocks=1 00:15:22.346 00:15:22.346 ' 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.346 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.347 16:39:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
paths/export.sh@5 -- # export PATH 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:22.347 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:15:22.347 16:39:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
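The `[: : integer expression expected` message above is a real (if benign) bash error captured from common.sh line 31: an empty string is handed to a numeric `-eq` test. A minimal sketch of the failure mode and the usual defensive fix, with a hypothetical variable name standing in for the empty setting:

```shell
# Hypothetical reproduction of the common.sh line 31 failure: the setting is
# empty, and bash's numeric test rejects an empty operand with
# "integer expression expected" (exit status 2).
setting=''                                        # hypothetical name for the empty value
[ "$setting" -eq 1 ] 2>/dev/null || bad_test=1    # fails, as in the log above
# Defensive form: default the empty string to 0 before comparing.
if [ "${setting:-0}" -eq 1 ]; then mode=huge; else mode=normal; fi
echo "$mode"
```

Because the failing test sits in an `if`-style guard, the script continues past it, which is why the run above proceeds normally after the message.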
common/autotest_common.sh@10 -- # set +x 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.487 16:39:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:15:30.487 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:30.487 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:30.487 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:30.487 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.487 16:39:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@247 -- # create_target_ns 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set 
lo up 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:15:30.487 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 
00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:30.488 10.0.0.1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:30.488 10.0.0.2 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:30.488 
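The `val_to_ip` steps above show where the 10.0.0.1/10.0.0.2 addresses come from: the harness's `ip_pool=0x0a000001` is an IPv4 address packed into a 32-bit integer, and consecutive initiator/target pairs are just `ip_pool`, `ip_pool + 1`, and so on. A self-contained sketch of that conversion (function name taken from the log; the unpacking arithmetic is an assumption consistent with the `printf '%u.%u.%u.%u'` call shown):

```shell
# Unpack a 32-bit integer into dotted-quad IPv4, as setup.sh's val_to_ip does.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' $((val >> 24 & 255)) $((val >> 16 & 255)) \
                         $((val >> 8 & 255))  $((val & 255))
}

first=$(val_to_ip 167772161)    # 0x0a000001 -> initiator side
second=$(val_to_ip 167772162)   # ip_pool + 1 -> target side
echo "$first $second"           # prints: 10.0.0.1 10.0.0.2
```

This also explains the loop guard `(_dev + no) * 2 <= 255` seen earlier: each interface pair consumes two addresses from the final octet of the pool.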
16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:30.488 16:39:36 
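The `ipts` call above shows the harness tagging every firewall rule with an `SPDK_NVMF:` comment that embeds the rule's own arguments, so teardown can later delete exactly the rules this run added and nothing else. A hedged re-creation of that wrapper, with `echo` standing in for `iptables` so the sketch runs unprivileged (the real wrapper in common.sh invokes iptables directly):

```shell
# Sketch of the ipts wrapper seen in the log: append a comment match that
# records the full rule spec under an SPDK_NVMF: prefix for later cleanup.
# 'echo' replaces the real iptables invocation so this runs without root.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

The constructed command matches the one logged above; port 4420 is the default NVMe-oF TCP listening port being opened on the initiator-side interface.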
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:30.488 16:39:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:30.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.589 ms 00:15:30.488 00:15:30.488 --- 10.0.0.1 ping statistics --- 00:15:30.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.488 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:30.488 16:39:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:30.488 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:30.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:15:30.489 00:15:30.489 --- 10.0.0.2 ping statistics --- 00:15:30.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.489 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:30.489 16:39:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:15:30.489 16:39:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 
10.0.0.2 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:15:30.489 ' 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.489 
16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=3028183 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 3028183 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3028183 ']' 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:30.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:30.489 16:39:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.489 [2024-11-05 16:39:36.722788] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:15:30.489 [2024-11-05 16:39:36.722838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.489 [2024-11-05 16:39:36.802084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.489 [2024-11-05 16:39:36.839821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.489 [2024-11-05 16:39:36.839854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.489 [2024-11-05 16:39:36.839862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.489 [2024-11-05 16:39:36.839869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.489 [2024-11-05 16:39:36.839875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:30.489 [2024-11-05 16:39:36.841609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.489 [2024-11-05 16:39:36.841723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.489 [2024-11-05 16:39:36.841881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.489 [2024-11-05 16:39:36.841881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.489 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:30.489 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:15:30.489 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:30.489 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.489 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 [2024-11-05 16:39:37.567576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.750 Malloc0 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 [2024-11-05 16:39:37.639965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:30.750 test case1: single bdev can't be used in multiple subsystems 
00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.750 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.750 [2024-11-05 16:39:37.675843] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:30.751 [2024-11-05 16:39:37.675874] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:30.751 [2024-11-05 16:39:37.675882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.751 request: 00:15:30.751 { 00:15:30.751 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:30.751 "namespace": { 00:15:30.751 
"bdev_name": "Malloc0", 00:15:30.751 "no_auto_visible": false 00:15:30.751 }, 00:15:30.751 "method": "nvmf_subsystem_add_ns", 00:15:30.751 "req_id": 1 00:15:30.751 } 00:15:30.751 Got JSON-RPC error response 00:15:30.751 response: 00:15:30.751 { 00:15:30.751 "code": -32602, 00:15:30.751 "message": "Invalid parameters" 00:15:30.751 } 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:30.751 Adding namespace failed - expected result. 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:30.751 test case2: host connect to nvmf target in multiple paths 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:30.751 [2024-11-05 16:39:37.687994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.751 16:39:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.663 16:39:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:34.069 16:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:34.069 16:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:15:34.069 16:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.069 16:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:34.069 16:39:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:15:35.980 16:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:35.980 16:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:35.980 16:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.980 16:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:35.980 16:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.980 16:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:15:35.980 16:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:35.980 [global] 00:15:35.980 thread=1 00:15:35.980 invalidate=1 00:15:35.980 rw=write 00:15:35.980 time_based=1 00:15:35.980 runtime=1 00:15:35.980 ioengine=libaio 00:15:35.980 direct=1 00:15:35.980 bs=4096 00:15:35.980 iodepth=1 00:15:35.980 
norandommap=0 00:15:35.980 numjobs=1 00:15:35.980 00:15:35.980 verify_dump=1 00:15:35.980 verify_backlog=512 00:15:35.980 verify_state_save=0 00:15:35.980 do_verify=1 00:15:35.980 verify=crc32c-intel 00:15:35.980 [job0] 00:15:35.980 filename=/dev/nvme0n1 00:15:35.980 Could not set queue depth (nvme0n1) 00:15:36.240 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:36.240 fio-3.35 00:15:36.240 Starting 1 thread 00:15:37.624 00:15:37.624 job0: (groupid=0, jobs=1): err= 0: pid=3029729: Tue Nov 5 16:39:44 2024 00:15:37.624 read: IOPS=17, BW=70.3KiB/s (72.0kB/s)(72.0KiB/1024msec) 00:15:37.624 slat (nsec): min=25971, max=31809, avg=26875.50, stdev=1249.04 00:15:37.624 clat (usec): min=1068, max=42982, avg=39783.54, stdev=9673.48 00:15:37.624 lat (usec): min=1094, max=43008, avg=39810.42, stdev=9673.48 00:15:37.624 clat percentiles (usec): 00:15:37.624 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:15:37.624 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:37.624 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:15:37.624 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:37.624 | 99.99th=[42730] 00:15:37.624 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:15:37.624 slat (nsec): min=8911, max=64167, avg=28839.91, stdev=10241.20 00:15:37.624 clat (usec): min=268, max=798, avg=563.62, stdev=93.84 00:15:37.624 lat (usec): min=296, max=832, avg=592.46, stdev=98.73 00:15:37.624 clat percentiles (usec): 00:15:37.624 | 1.00th=[ 347], 5.00th=[ 404], 10.00th=[ 433], 20.00th=[ 486], 00:15:37.624 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 594], 00:15:37.624 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 685], 95.00th=[ 717], 00:15:37.624 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 799], 99.95th=[ 799], 00:15:37.624 | 99.99th=[ 799] 00:15:37.624 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:37.624 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:37.624 lat (usec) : 500=23.40%, 750=72.45%, 1000=0.75% 00:15:37.624 lat (msec) : 2=0.19%, 50=3.21% 00:15:37.624 cpu : usr=0.98%, sys=1.86%, ctx=530, majf=0, minf=1 00:15:37.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:37.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.624 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:37.624 00:15:37.624 Run status group 0 (all jobs): 00:15:37.624 READ: bw=70.3KiB/s (72.0kB/s), 70.3KiB/s-70.3KiB/s (72.0kB/s-72.0kB/s), io=72.0KiB (73.7kB), run=1024-1024msec 00:15:37.624 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:15:37.624 00:15:37.624 Disk stats (read/write): 00:15:37.624 nvme0n1: ios=65/512, merge=0/0, ticks=645/232, in_queue=877, util=93.59% 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:37.624 16:39:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:37.624 rmmod nvme_tcp 00:15:37.624 rmmod nvme_fabrics 00:15:37.624 rmmod nvme_keyring 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 3028183 ']' 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 3028183 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3028183 ']' 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3028183 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:15:37.624 16:39:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:37.624 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3028183 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3028183' 00:15:37.884 killing process with pid 3028183 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3028183 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3028183 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:37.884 16:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:15:40.427 16:39:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # 
dev_map=() 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:15:40.427 00:15:40.427 real 0m17.930s 00:15:40.427 user 0m50.389s 00:15:40.427 sys 0m6.307s 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 ************************************ 00:15:40.427 END TEST nvmf_nmic 00:15:40.427 ************************************ 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:40.427 16:39:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:40.427 ************************************ 00:15:40.427 START TEST nvmf_fio_target 00:15:40.427 ************************************ 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:40.427 * Looking for test storage... 
00:15:40.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:15:40.427 16:39:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:40.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.427 
--rc genhtml_branch_coverage=1 00:15:40.427 --rc genhtml_function_coverage=1 00:15:40.427 --rc genhtml_legend=1 00:15:40.427 --rc geninfo_all_blocks=1 00:15:40.427 --rc geninfo_unexecuted_blocks=1 00:15:40.427 00:15:40.427 ' 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:40.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.427 --rc genhtml_branch_coverage=1 00:15:40.427 --rc genhtml_function_coverage=1 00:15:40.427 --rc genhtml_legend=1 00:15:40.427 --rc geninfo_all_blocks=1 00:15:40.427 --rc geninfo_unexecuted_blocks=1 00:15:40.427 00:15:40.427 ' 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:40.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.427 --rc genhtml_branch_coverage=1 00:15:40.427 --rc genhtml_function_coverage=1 00:15:40.427 --rc genhtml_legend=1 00:15:40.427 --rc geninfo_all_blocks=1 00:15:40.427 --rc geninfo_unexecuted_blocks=1 00:15:40.427 00:15:40.427 ' 00:15:40.427 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:40.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.427 --rc genhtml_branch_coverage=1 00:15:40.427 --rc genhtml_function_coverage=1 00:15:40.427 --rc genhtml_legend=1 00:15:40.427 --rc geninfo_all_blocks=1 00:15:40.427 --rc geninfo_unexecuted_blocks=1 00:15:40.427 00:15:40.428 ' 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.428 
16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 
-- # : 0 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:40.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:40.428 16:39:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:15:40.428 16:39:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@135 -- # local -ga net_devs 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:48.566 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:48.566 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:15:48.566 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:48.566 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:48.566 
16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@247 -- # create_target_ns 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:48.566 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:15:48.567 16:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:48.567 10.0.0.1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # 
[[ -n NVMF_TARGET_NS_CMD ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:48.567 10.0.0.2 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- 
# set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:48.567 16:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:48.567 
16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:48.567 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:48.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.647 ms 00:15:48.568 00:15:48.568 --- 10.0.0.1 ping statistics --- 00:15:48.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.568 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:15:48.568 16:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:48.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:48.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:15:48.568 00:15:48.568 --- 10.0.0.2 ping statistics --- 00:15:48.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.568 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:48.568 16:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:15:48.568 
16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:15:48.568 16:39:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:48.568 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:15:48.569 ' 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=3034270 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 3034270 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # 
'[' -z 3034270 ']' 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:48.569 16:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 [2024-11-05 16:39:54.676119] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:15:48.569 [2024-11-05 16:39:54.676184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.569 [2024-11-05 16:39:54.763571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:48.569 [2024-11-05 16:39:54.805884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.569 [2024-11-05 16:39:54.805926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.569 [2024-11-05 16:39:54.805934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.569 [2024-11-05 16:39:54.805941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.569 [2024-11-05 16:39:54.805947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:48.569 [2024-11-05 16:39:54.807539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.569 [2024-11-05 16:39:54.807656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.569 [2024-11-05 16:39:54.807817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.569 [2024-11-05 16:39:54.807818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.569 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:48.569 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:15:48.569 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:48.569 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:48.569 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.569 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:48.829 [2024-11-05 16:39:55.683340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.829 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.091 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:49.091 16:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.091 16:39:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:49.091 16:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.351 16:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:49.351 16:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.611 16:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:49.611 16:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:49.611 16:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.871 16:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:49.871 16:39:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.132 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:50.132 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.392 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:50.392 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:15:50.392 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:50.652 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:50.652 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.912 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:50.912 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:50.912 16:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.172 [2024-11-05 16:39:58.120973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.172 16:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:51.476 16:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:51.476 16:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:15:53.394 16:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:53.394 16:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:15:53.394 16:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.394 16:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:15:53.394 16:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:15:53.394 16:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:15:55.318 16:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:55.318 16:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:55.318 16:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.318 16:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:15:55.318 16:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.318 16:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:15:55.318 16:40:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:55.318 [global] 00:15:55.318 thread=1 00:15:55.318 invalidate=1 00:15:55.318 rw=write 00:15:55.318 time_based=1 00:15:55.318 runtime=1 00:15:55.318 ioengine=libaio 00:15:55.318 direct=1 00:15:55.318 bs=4096 00:15:55.318 iodepth=1 00:15:55.318 norandommap=0 00:15:55.318 numjobs=1 00:15:55.318 00:15:55.318 
verify_dump=1 00:15:55.318 verify_backlog=512 00:15:55.318 verify_state_save=0 00:15:55.318 do_verify=1 00:15:55.318 verify=crc32c-intel 00:15:55.318 [job0] 00:15:55.318 filename=/dev/nvme0n1 00:15:55.318 [job1] 00:15:55.318 filename=/dev/nvme0n2 00:15:55.318 [job2] 00:15:55.318 filename=/dev/nvme0n3 00:15:55.318 [job3] 00:15:55.318 filename=/dev/nvme0n4 00:15:55.318 Could not set queue depth (nvme0n1) 00:15:55.318 Could not set queue depth (nvme0n2) 00:15:55.318 Could not set queue depth (nvme0n3) 00:15:55.318 Could not set queue depth (nvme0n4) 00:15:55.578 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.578 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.578 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.578 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.578 fio-3.35 00:15:55.578 Starting 4 threads 00:15:56.994 00:15:56.994 job0: (groupid=0, jobs=1): err= 0: pid=3036019: Tue Nov 5 16:40:03 2024 00:15:56.994 read: IOPS=16, BW=67.6KiB/s (69.2kB/s)(68.0KiB/1006msec) 00:15:56.994 slat (nsec): min=25480, max=26417, avg=25807.94, stdev=229.34 00:15:56.994 clat (usec): min=1144, max=43024, avg=39669.81, stdev=9939.99 00:15:56.994 lat (usec): min=1170, max=43050, avg=39695.61, stdev=9940.00 00:15:56.994 clat percentiles (usec): 00:15:56.994 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41681], 00:15:56.994 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:15:56.994 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:15:56.994 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:15:56.994 | 99.99th=[43254] 00:15:56.994 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:15:56.994 slat (nsec): min=10087, 
max=69949, avg=31016.91, stdev=9595.64 00:15:56.994 clat (usec): min=138, max=3225, avg=608.41, stdev=174.26 00:15:56.994 lat (usec): min=149, max=3260, avg=639.43, stdev=177.06 00:15:56.994 clat percentiles (usec): 00:15:56.994 | 1.00th=[ 306], 5.00th=[ 371], 10.00th=[ 429], 20.00th=[ 494], 00:15:56.994 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 652], 00:15:56.994 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 807], 00:15:56.994 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 3228], 99.95th=[ 3228], 00:15:56.994 | 99.99th=[ 3228] 00:15:56.994 bw ( KiB/s): min= 4087, max= 4087, per=45.87%, avg=4087.00, stdev= 0.00, samples=1 00:15:56.994 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:15:56.994 lat (usec) : 250=0.19%, 500=20.60%, 750=65.97%, 1000=9.83% 00:15:56.994 lat (msec) : 2=0.19%, 4=0.19%, 50=3.02% 00:15:56.994 cpu : usr=0.80%, sys=1.49%, ctx=531, majf=0, minf=1 00:15:56.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:56.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.994 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:56.994 job1: (groupid=0, jobs=1): err= 0: pid=3036020: Tue Nov 5 16:40:03 2024 00:15:56.994 read: IOPS=24, BW=97.6KiB/s (99.9kB/s)(100KiB/1025msec) 00:15:56.994 slat (nsec): min=25831, max=26753, avg=26350.12, stdev=240.10 00:15:56.994 clat (usec): min=500, max=41958, avg=31390.94, stdev=17631.79 00:15:56.994 lat (usec): min=527, max=41984, avg=31417.29, stdev=17631.83 00:15:56.994 clat percentiles (usec): 00:15:56.994 | 1.00th=[ 502], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 709], 00:15:56.994 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:56.994 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 
00:15:56.994 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:56.994 | 99.99th=[42206] 00:15:56.995 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:15:56.995 slat (nsec): min=9663, max=67916, avg=30653.22, stdev=10092.86 00:15:56.995 clat (usec): min=102, max=760, avg=429.07, stdev=135.67 00:15:56.995 lat (usec): min=114, max=824, avg=459.72, stdev=138.44 00:15:56.995 clat percentiles (usec): 00:15:56.995 | 1.00th=[ 135], 5.00th=[ 217], 10.00th=[ 251], 20.00th=[ 310], 00:15:56.995 | 30.00th=[ 343], 40.00th=[ 383], 50.00th=[ 429], 60.00th=[ 465], 00:15:56.995 | 70.00th=[ 510], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 652], 00:15:56.995 | 99.00th=[ 709], 99.50th=[ 742], 99.90th=[ 758], 99.95th=[ 758], 00:15:56.995 | 99.99th=[ 758] 00:15:56.995 bw ( KiB/s): min= 4096, max= 4096, per=45.97%, avg=4096.00, stdev= 0.00, samples=1 00:15:56.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:56.995 lat (usec) : 250=9.50%, 500=56.42%, 750=30.17%, 1000=0.37% 00:15:56.995 lat (msec) : 50=3.54% 00:15:56.995 cpu : usr=1.07%, sys=1.17%, ctx=538, majf=0, minf=1 00:15:56.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:56.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.995 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:56.995 job2: (groupid=0, jobs=1): err= 0: pid=3036021: Tue Nov 5 16:40:03 2024 00:15:56.995 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:15:56.995 slat (nsec): min=7989, max=62054, avg=26076.96, stdev=2306.28 00:15:56.995 clat (usec): min=464, max=41450, avg=1024.65, stdev=1791.97 00:15:56.995 lat (usec): min=505, max=41475, avg=1050.73, stdev=1791.94 00:15:56.995 clat percentiles (usec): 00:15:56.995 | 1.00th=[ 652], 5.00th=[ 
791], 10.00th=[ 840], 20.00th=[ 898], 00:15:56.995 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:15:56.995 | 70.00th=[ 988], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:15:56.995 | 99.00th=[ 1123], 99.50th=[ 1123], 99.90th=[41681], 99.95th=[41681], 00:15:56.995 | 99.99th=[41681] 00:15:56.995 write: IOPS=746, BW=2985KiB/s (3057kB/s)(2988KiB/1001msec); 0 zone resets 00:15:56.995 slat (nsec): min=9425, max=94269, avg=30514.40, stdev=9279.96 00:15:56.995 clat (usec): min=125, max=934, avg=575.65, stdev=135.95 00:15:56.995 lat (usec): min=136, max=967, avg=606.17, stdev=138.79 00:15:56.995 clat percentiles (usec): 00:15:56.995 | 1.00th=[ 247], 5.00th=[ 343], 10.00th=[ 379], 20.00th=[ 461], 00:15:56.995 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:15:56.995 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 775], 00:15:56.995 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:15:56.995 | 99.99th=[ 938] 00:15:56.995 bw ( KiB/s): min= 4096, max= 4096, per=45.97%, avg=4096.00, stdev= 0.00, samples=1 00:15:56.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:56.995 lat (usec) : 250=0.71%, 500=15.41%, 750=38.84%, 1000=37.25% 00:15:56.995 lat (msec) : 2=7.70%, 50=0.08% 00:15:56.995 cpu : usr=1.70%, sys=3.90%, ctx=1260, majf=0, minf=2 00:15:56.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:56.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.995 issued rwts: total=512,747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:56.995 job3: (groupid=0, jobs=1): err= 0: pid=3036022: Tue Nov 5 16:40:03 2024 00:15:56.995 read: IOPS=302, BW=1211KiB/s (1240kB/s)(1236KiB/1021msec) 00:15:56.995 slat (nsec): min=8111, max=28905, avg=25848.23, stdev=1482.03 00:15:56.995 
clat (usec): min=569, max=42096, avg=2144.18, stdev=6900.16 00:15:56.995 lat (usec): min=594, max=42121, avg=2170.03, stdev=6900.15 00:15:56.995 clat percentiles (usec): 00:15:56.995 | 1.00th=[ 635], 5.00th=[ 766], 10.00th=[ 816], 20.00th=[ 898], 00:15:56.995 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 988], 00:15:56.995 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1106], 00:15:56.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:56.995 | 99.99th=[42206] 00:15:56.995 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:15:56.995 slat (nsec): min=9810, max=83247, avg=30482.75, stdev=9228.76 00:15:56.995 clat (usec): min=311, max=941, avg=640.53, stdev=122.11 00:15:56.995 lat (usec): min=345, max=975, avg=671.01, stdev=125.87 00:15:56.995 clat percentiles (usec): 00:15:56.995 | 1.00th=[ 363], 5.00th=[ 420], 10.00th=[ 474], 20.00th=[ 545], 00:15:56.995 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:15:56.995 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 832], 00:15:56.995 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 938], 99.95th=[ 938], 00:15:56.995 | 99.99th=[ 938] 00:15:56.995 bw ( KiB/s): min= 4096, max= 4096, per=45.97%, avg=4096.00, stdev= 0.00, samples=1 00:15:56.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:56.995 lat (usec) : 500=8.16%, 750=43.36%, 1000=35.57% 00:15:56.995 lat (msec) : 2=11.81%, 50=1.10% 00:15:56.995 cpu : usr=0.88%, sys=2.65%, ctx=821, majf=0, minf=2 00:15:56.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:56.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.995 issued rwts: total=309,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:56.995 00:15:56.995 Run status group 0 
(all jobs): 00:15:56.995 READ: bw=3368KiB/s (3449kB/s), 67.6KiB/s-2046KiB/s (69.2kB/s-2095kB/s), io=3452KiB (3535kB), run=1001-1025msec 00:15:56.995 WRITE: bw=8909KiB/s (9123kB/s), 1998KiB/s-2985KiB/s (2046kB/s-3057kB/s), io=9132KiB (9351kB), run=1001-1025msec 00:15:56.995 00:15:56.995 Disk stats (read/write): 00:15:56.995 nvme0n1: ios=37/512, merge=0/0, ticks=1425/299, in_queue=1724, util=96.49% 00:15:56.995 nvme0n2: ios=70/512, merge=0/0, ticks=1037/210, in_queue=1247, util=97.14% 00:15:56.995 nvme0n3: ios=483/512, merge=0/0, ticks=511/280, in_queue=791, util=88.36% 00:15:56.995 nvme0n4: ios=304/512, merge=0/0, ticks=459/314, in_queue=773, util=89.51% 00:15:56.995 16:40:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:56.995 [global] 00:15:56.995 thread=1 00:15:56.995 invalidate=1 00:15:56.995 rw=randwrite 00:15:56.995 time_based=1 00:15:56.995 runtime=1 00:15:56.995 ioengine=libaio 00:15:56.995 direct=1 00:15:56.995 bs=4096 00:15:56.995 iodepth=1 00:15:56.995 norandommap=0 00:15:56.995 numjobs=1 00:15:56.995 00:15:56.995 verify_dump=1 00:15:56.995 verify_backlog=512 00:15:56.995 verify_state_save=0 00:15:56.995 do_verify=1 00:15:56.995 verify=crc32c-intel 00:15:56.995 [job0] 00:15:56.995 filename=/dev/nvme0n1 00:15:56.995 [job1] 00:15:56.995 filename=/dev/nvme0n2 00:15:56.995 [job2] 00:15:56.995 filename=/dev/nvme0n3 00:15:56.995 [job3] 00:15:56.995 filename=/dev/nvme0n4 00:15:56.995 Could not set queue depth (nvme0n1) 00:15:56.995 Could not set queue depth (nvme0n2) 00:15:56.995 Could not set queue depth (nvme0n3) 00:15:56.995 Could not set queue depth (nvme0n4) 00:15:57.257 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.257 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.257 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.257 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.257 fio-3.35 00:15:57.257 Starting 4 threads 00:15:58.661 00:15:58.661 job0: (groupid=0, jobs=1): err= 0: pid=3036543: Tue Nov 5 16:40:05 2024 00:15:58.661 read: IOPS=108, BW=435KiB/s (446kB/s)(436KiB/1002msec) 00:15:58.661 slat (nsec): min=6601, max=45595, avg=24618.41, stdev=6264.96 00:15:58.661 clat (usec): min=300, max=42145, avg=7080.43, stdev=14757.49 00:15:58.661 lat (usec): min=307, max=42172, avg=7105.05, stdev=14758.37 00:15:58.661 clat percentiles (usec): 00:15:58.661 | 1.00th=[ 453], 5.00th=[ 502], 10.00th=[ 586], 20.00th=[ 652], 00:15:58.661 | 30.00th=[ 701], 40.00th=[ 750], 50.00th=[ 799], 60.00th=[ 865], 00:15:58.661 | 70.00th=[ 906], 80.00th=[ 1004], 90.00th=[41157], 95.00th=[41157], 00:15:58.661 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:58.661 | 99.99th=[42206] 00:15:58.661 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:15:58.661 slat (nsec): min=9561, max=51116, avg=28301.96, stdev=10330.66 00:15:58.661 clat (usec): min=183, max=690, avg=405.96, stdev=110.32 00:15:58.661 lat (usec): min=193, max=738, avg=434.26, stdev=112.55 00:15:58.661 clat percentiles (usec): 00:15:58.661 | 1.00th=[ 204], 5.00th=[ 229], 10.00th=[ 269], 20.00th=[ 314], 00:15:58.661 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 400], 60.00th=[ 441], 00:15:58.661 | 70.00th=[ 465], 80.00th=[ 506], 90.00th=[ 553], 95.00th=[ 603], 00:15:58.661 | 99.00th=[ 652], 99.50th=[ 685], 99.90th=[ 693], 99.95th=[ 693], 00:15:58.661 | 99.99th=[ 693] 00:15:58.661 bw ( KiB/s): min= 4096, max= 4096, per=35.98%, avg=4096.00, stdev= 0.00, samples=1 00:15:58.662 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:58.662 lat (usec) : 250=6.60%, 500=58.94%, 750=23.83%, 1000=7.09% 00:15:58.662 lat (msec) 
: 2=0.81%, 50=2.74% 00:15:58.662 cpu : usr=0.90%, sys=1.70%, ctx=625, majf=0, minf=1 00:15:58.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.662 issued rwts: total=109,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.662 job1: (groupid=0, jobs=1): err= 0: pid=3036544: Tue Nov 5 16:40:05 2024 00:15:58.662 read: IOPS=511, BW=2045KiB/s (2095kB/s)(2068KiB/1011msec) 00:15:58.662 slat (nsec): min=6335, max=63011, avg=25146.19, stdev=6585.68 00:15:58.662 clat (usec): min=342, max=41931, avg=997.74, stdev=3597.55 00:15:58.662 lat (usec): min=370, max=41958, avg=1022.88, stdev=3597.70 00:15:58.662 clat percentiles (usec): 00:15:58.662 | 1.00th=[ 367], 5.00th=[ 465], 10.00th=[ 529], 20.00th=[ 570], 00:15:58.662 | 30.00th=[ 627], 40.00th=[ 660], 50.00th=[ 685], 60.00th=[ 717], 00:15:58.662 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 840], 95.00th=[ 873], 00:15:58.662 | 99.00th=[ 1172], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:58.662 | 99.99th=[41681] 00:15:58.662 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:15:58.662 slat (nsec): min=8610, max=67752, avg=28202.89, stdev=10132.53 00:15:58.662 clat (usec): min=124, max=957, avg=431.20, stdev=143.55 00:15:58.662 lat (usec): min=158, max=989, avg=459.41, stdev=145.62 00:15:58.662 clat percentiles (usec): 00:15:58.662 | 1.00th=[ 182], 5.00th=[ 227], 10.00th=[ 269], 20.00th=[ 293], 00:15:58.662 | 30.00th=[ 326], 40.00th=[ 383], 50.00th=[ 424], 60.00th=[ 457], 00:15:58.662 | 70.00th=[ 502], 80.00th=[ 553], 90.00th=[ 635], 95.00th=[ 693], 00:15:58.662 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 914], 99.95th=[ 955], 00:15:58.662 | 99.99th=[ 955] 00:15:58.662 bw ( KiB/s): min= 4096, max= 4096, per=35.98%, 
avg=4096.00, stdev= 0.00, samples=2 00:15:58.662 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:15:58.662 lat (usec) : 250=5.19%, 500=43.48%, 750=39.78%, 1000=10.97% 00:15:58.662 lat (msec) : 2=0.32%, 50=0.26% 00:15:58.662 cpu : usr=2.97%, sys=5.64%, ctx=1542, majf=0, minf=1 00:15:58.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.662 issued rwts: total=517,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.662 job2: (groupid=0, jobs=1): err= 0: pid=3036546: Tue Nov 5 16:40:05 2024 00:15:58.662 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:15:58.662 slat (nsec): min=3556, max=57963, avg=25993.37, stdev=4317.58 00:15:58.662 clat (usec): min=544, max=1426, avg=1036.36, stdev=100.75 00:15:58.662 lat (usec): min=570, max=1452, avg=1062.36, stdev=101.12 00:15:58.662 clat percentiles (usec): 00:15:58.662 | 1.00th=[ 725], 5.00th=[ 865], 10.00th=[ 930], 20.00th=[ 971], 00:15:58.662 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1057], 00:15:58.662 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:15:58.662 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1434], 99.95th=[ 1434], 00:15:58.662 | 99.99th=[ 1434] 00:15:58.662 write: IOPS=687, BW=2749KiB/s (2815kB/s)(2752KiB/1001msec); 0 zone resets 00:15:58.662 slat (nsec): min=3877, max=61236, avg=29101.82, stdev=9174.11 00:15:58.662 clat (usec): min=214, max=1017, avg=620.29, stdev=128.18 00:15:58.662 lat (usec): min=223, max=1050, avg=649.40, stdev=131.01 00:15:58.662 clat percentiles (usec): 00:15:58.662 | 1.00th=[ 289], 5.00th=[ 388], 10.00th=[ 457], 20.00th=[ 510], 00:15:58.662 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:15:58.662 | 70.00th=[ 693], 
80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 816], 00:15:58.662 | 99.00th=[ 938], 99.50th=[ 996], 99.90th=[ 1020], 99.95th=[ 1020], 00:15:58.662 | 99.99th=[ 1020] 00:15:58.662 bw ( KiB/s): min= 4096, max= 4096, per=35.98%, avg=4096.00, stdev= 0.00, samples=1 00:15:58.662 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:58.662 lat (usec) : 250=0.25%, 500=10.25%, 750=40.42%, 1000=18.08% 00:15:58.662 lat (msec) : 2=31.00% 00:15:58.662 cpu : usr=1.50%, sys=3.80%, ctx=1200, majf=0, minf=2 00:15:58.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.662 issued rwts: total=512,688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.662 job3: (groupid=0, jobs=1): err= 0: pid=3036550: Tue Nov 5 16:40:05 2024 00:15:58.662 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:15:58.662 slat (nsec): min=26241, max=61233, avg=27628.32, stdev=3124.86 00:15:58.662 clat (usec): min=766, max=1250, avg=1033.52, stdev=81.15 00:15:58.662 lat (usec): min=793, max=1277, avg=1061.14, stdev=80.94 00:15:58.662 clat percentiles (usec): 00:15:58.662 | 1.00th=[ 816], 5.00th=[ 881], 10.00th=[ 930], 20.00th=[ 971], 00:15:58.662 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:15:58.662 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1156], 00:15:58.662 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1254], 99.95th=[ 1254], 00:15:58.662 | 99.99th=[ 1254] 00:15:58.662 write: IOPS=652, BW=2609KiB/s (2672kB/s)(2612KiB/1001msec); 0 zone resets 00:15:58.662 slat (nsec): min=9937, max=57378, avg=31274.35, stdev=9007.99 00:15:58.662 clat (usec): min=240, max=1045, avg=653.23, stdev=127.74 00:15:58.662 lat (usec): min=251, max=1079, avg=684.51, stdev=130.97 00:15:58.662 clat 
percentiles (usec): 00:15:58.662 | 1.00th=[ 310], 5.00th=[ 429], 10.00th=[ 474], 20.00th=[ 545], 00:15:58.662 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 693], 00:15:58.662 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 807], 95.00th=[ 857], 00:15:58.662 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 1045], 99.95th=[ 1045], 00:15:58.662 | 99.99th=[ 1045] 00:15:58.662 bw ( KiB/s): min= 4096, max= 4096, per=35.98%, avg=4096.00, stdev= 0.00, samples=1 00:15:58.662 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:58.662 lat (usec) : 250=0.09%, 500=7.12%, 750=37.34%, 1000=25.84% 00:15:58.662 lat (msec) : 2=29.61% 00:15:58.662 cpu : usr=2.10%, sys=3.20%, ctx=1168, majf=0, minf=1 00:15:58.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.662 issued rwts: total=512,653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.662 00:15:58.662 Run status group 0 (all jobs): 00:15:58.662 READ: bw=6528KiB/s (6685kB/s), 435KiB/s-2046KiB/s (446kB/s-2095kB/s), io=6600KiB (6758kB), run=1001-1011msec 00:15:58.662 WRITE: bw=11.1MiB/s (11.7MB/s), 2044KiB/s-4051KiB/s (2093kB/s-4149kB/s), io=11.2MiB (11.8MB), run=1001-1011msec 00:15:58.662 00:15:58.662 Disk stats (read/write): 00:15:58.662 nvme0n1: ios=156/512, merge=0/0, ticks=1084/201, in_queue=1285, util=96.89% 00:15:58.662 nvme0n2: ios=545/1015, merge=0/0, ticks=338/335, in_queue=673, util=86.53% 00:15:58.662 nvme0n3: ios=512/512, merge=0/0, ticks=594/316, in_queue=910, util=92.29% 00:15:58.662 nvme0n4: ios=505/512, merge=0/0, ticks=928/324, in_queue=1252, util=97.11% 00:15:58.662 16:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 
-d 128 -t write -r 1 -v 00:15:58.662 [global] 00:15:58.662 thread=1 00:15:58.662 invalidate=1 00:15:58.662 rw=write 00:15:58.662 time_based=1 00:15:58.662 runtime=1 00:15:58.662 ioengine=libaio 00:15:58.662 direct=1 00:15:58.662 bs=4096 00:15:58.662 iodepth=128 00:15:58.662 norandommap=0 00:15:58.662 numjobs=1 00:15:58.662 00:15:58.662 verify_dump=1 00:15:58.662 verify_backlog=512 00:15:58.662 verify_state_save=0 00:15:58.662 do_verify=1 00:15:58.662 verify=crc32c-intel 00:15:58.662 [job0] 00:15:58.662 filename=/dev/nvme0n1 00:15:58.662 [job1] 00:15:58.662 filename=/dev/nvme0n2 00:15:58.662 [job2] 00:15:58.662 filename=/dev/nvme0n3 00:15:58.662 [job3] 00:15:58.662 filename=/dev/nvme0n4 00:15:58.662 Could not set queue depth (nvme0n1) 00:15:58.662 Could not set queue depth (nvme0n2) 00:15:58.662 Could not set queue depth (nvme0n3) 00:15:58.662 Could not set queue depth (nvme0n4) 00:15:58.925 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.925 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.925 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.925 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.925 fio-3.35 00:15:58.925 Starting 4 threads 00:16:00.331 00:16:00.331 job0: (groupid=0, jobs=1): err= 0: pid=3037073: Tue Nov 5 16:40:07 2024 00:16:00.331 read: IOPS=4118, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1010msec) 00:16:00.331 slat (nsec): min=885, max=51047k, avg=120922.66, stdev=1173281.78 00:16:00.331 clat (usec): min=3881, max=88527, avg=14053.03, stdev=16642.26 00:16:00.331 lat (usec): min=5225, max=88536, avg=14173.95, stdev=16755.44 00:16:00.331 clat percentiles (usec): 00:16:00.331 | 1.00th=[ 5932], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 7373], 00:16:00.331 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 
8291], 60.00th=[ 8586], 00:16:00.331 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[39584], 95.00th=[55837], 00:16:00.331 | 99.00th=[84411], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:16:00.331 | 99.99th=[88605] 00:16:00.331 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:16:00.331 slat (nsec): min=1531, max=14223k, avg=106236.31, stdev=632231.91 00:16:00.331 clat (usec): min=4984, max=81972, avg=14671.72, stdev=16967.21 00:16:00.331 lat (usec): min=4987, max=81978, avg=14777.95, stdev=17063.01 00:16:00.331 clat percentiles (usec): 00:16:00.331 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6325], 00:16:00.331 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7373], 00:16:00.331 | 70.00th=[ 8586], 80.00th=[17957], 90.00th=[38536], 95.00th=[55313], 00:16:00.331 | 99.00th=[80217], 99.50th=[80217], 99.90th=[82314], 99.95th=[82314], 00:16:00.331 | 99.99th=[82314] 00:16:00.331 bw ( KiB/s): min= 6144, max=30208, per=33.52%, avg=18176.00, stdev=17015.82, samples=2 00:16:00.331 iops : min= 1536, max= 7552, avg=4544.00, stdev=4253.95, samples=2 00:16:00.331 lat (msec) : 4=0.01%, 10=77.94%, 20=6.03%, 50=9.44%, 100=6.57% 00:16:00.331 cpu : usr=1.49%, sys=3.07%, ctx=515, majf=0, minf=1 00:16:00.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:00.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.331 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.332 job1: (groupid=0, jobs=1): err= 0: pid=3037074: Tue Nov 5 16:40:07 2024 00:16:00.332 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:16:00.332 slat (nsec): min=1143, max=21388k, avg=171119.15, stdev=1136213.12 00:16:00.332 clat (usec): min=9006, max=61256, avg=22731.58, stdev=10467.01 00:16:00.332 lat 
(usec): min=9008, max=62455, avg=22902.70, stdev=10565.63 00:16:00.332 clat percentiles (usec): 00:16:00.332 | 1.00th=[10814], 5.00th=[12256], 10.00th=[12911], 20.00th=[14353], 00:16:00.332 | 30.00th=[14877], 40.00th=[16909], 50.00th=[19268], 60.00th=[21365], 00:16:00.332 | 70.00th=[28181], 80.00th=[32637], 90.00th=[35914], 95.00th=[43779], 00:16:00.332 | 99.00th=[54789], 99.50th=[55837], 99.90th=[56886], 99.95th=[57934], 00:16:00.332 | 99.99th=[61080] 00:16:00.332 write: IOPS=3180, BW=12.4MiB/s (13.0MB/s)(12.6MiB/1011msec); 0 zone resets 00:16:00.332 slat (nsec): min=1589, max=12490k, avg=142143.33, stdev=787608.01 00:16:00.332 clat (usec): min=2920, max=52472, avg=17914.51, stdev=12224.72 00:16:00.332 lat (usec): min=2930, max=54721, avg=18056.66, stdev=12318.95 00:16:00.332 clat percentiles (usec): 00:16:00.332 | 1.00th=[ 4293], 5.00th=[ 4752], 10.00th=[ 7046], 20.00th=[10028], 00:16:00.332 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12911], 60.00th=[13698], 00:16:00.332 | 70.00th=[17171], 80.00th=[26608], 90.00th=[40633], 95.00th=[44827], 00:16:00.332 | 99.00th=[49021], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:16:00.332 | 99.99th=[52691] 00:16:00.332 bw ( KiB/s): min=10912, max=13784, per=22.77%, avg=12348.00, stdev=2030.81, samples=2 00:16:00.332 iops : min= 2728, max= 3446, avg=3087.00, stdev=507.70, samples=2 00:16:00.332 lat (msec) : 4=0.45%, 10=9.81%, 20=54.94%, 50=32.75%, 100=2.05% 00:16:00.332 cpu : usr=2.38%, sys=3.37%, ctx=249, majf=0, minf=1 00:16:00.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:00.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.332 issued rwts: total=3072,3215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.332 job2: (groupid=0, jobs=1): err= 0: pid=3037075: Tue Nov 5 16:40:07 2024 00:16:00.332 
read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:16:00.332 slat (nsec): min=947, max=20106k, avg=190193.71, stdev=1132671.16 00:16:00.332 clat (usec): min=11827, max=69785, avg=23153.94, stdev=10053.55 00:16:00.332 lat (usec): min=11834, max=69793, avg=23344.14, stdev=10171.07 00:16:00.332 clat percentiles (usec): 00:16:00.332 | 1.00th=[13435], 5.00th=[14091], 10.00th=[14484], 20.00th=[15270], 00:16:00.332 | 30.00th=[16450], 40.00th=[18220], 50.00th=[19530], 60.00th=[22152], 00:16:00.332 | 70.00th=[25297], 80.00th=[28181], 90.00th=[39060], 95.00th=[45876], 00:16:00.332 | 99.00th=[64226], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:16:00.332 | 99.99th=[69731] 00:16:00.332 write: IOPS=2525, BW=9.86MiB/s (10.3MB/s)(9.91MiB/1005msec); 0 zone resets 00:16:00.332 slat (nsec): min=1675, max=11920k, avg=234783.35, stdev=1028390.95 00:16:00.332 clat (msec): min=4, max=107, avg=31.49, stdev=22.83 00:16:00.332 lat (msec): min=4, max=107, avg=31.73, stdev=22.97 00:16:00.332 clat percentiles (msec): 00:16:00.332 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 13], 00:16:00.332 | 30.00th=[ 13], 40.00th=[ 17], 50.00th=[ 28], 60.00th=[ 35], 00:16:00.332 | 70.00th=[ 40], 80.00th=[ 47], 90.00th=[ 63], 95.00th=[ 82], 00:16:00.332 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 108], 99.95th=[ 108], 00:16:00.332 | 99.99th=[ 108] 00:16:00.332 bw ( KiB/s): min= 9504, max= 9784, per=17.79%, avg=9644.00, stdev=197.99, samples=2 00:16:00.332 iops : min= 2376, max= 2446, avg=2411.00, stdev=49.50, samples=2 00:16:00.332 lat (msec) : 10=3.12%, 20=44.96%, 50=42.96%, 100=7.68%, 250=1.29% 00:16:00.332 cpu : usr=1.29%, sys=3.69%, ctx=278, majf=0, minf=1 00:16:00.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:00.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.332 issued rwts: total=2048,2538,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:00.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.332 job3: (groupid=0, jobs=1): err= 0: pid=3037076: Tue Nov 5 16:40:07 2024 00:16:00.332 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:16:00.332 slat (nsec): min=1073, max=13634k, avg=131279.94, stdev=857949.96 00:16:00.332 clat (usec): min=3647, max=61886, avg=14710.68, stdev=8958.66 00:16:00.332 lat (usec): min=5019, max=61892, avg=14841.96, stdev=9051.06 00:16:00.332 clat percentiles (usec): 00:16:00.332 | 1.00th=[ 6718], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[10159], 00:16:00.332 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:16:00.332 | 70.00th=[13435], 80.00th=[14222], 90.00th=[22938], 95.00th=[38011], 00:16:00.332 | 99.00th=[52167], 99.50th=[55837], 99.90th=[62129], 99.95th=[62129], 00:16:00.332 | 99.99th=[62129] 00:16:00.332 write: IOPS=3347, BW=13.1MiB/s (13.7MB/s)(13.3MiB/1015msec); 0 zone resets 00:16:00.332 slat (nsec): min=1789, max=10309k, avg=167836.91, stdev=753052.98 00:16:00.332 clat (usec): min=1495, max=62681, avg=24592.05, stdev=16574.48 00:16:00.332 lat (usec): min=1506, max=62689, avg=24759.89, stdev=16688.41 00:16:00.332 clat percentiles (usec): 00:16:00.332 | 1.00th=[ 3294], 5.00th=[ 5342], 10.00th=[ 6652], 20.00th=[ 8225], 00:16:00.332 | 30.00th=[ 9896], 40.00th=[12911], 50.00th=[16450], 60.00th=[33162], 00:16:00.332 | 70.00th=[38536], 80.00th=[42730], 90.00th=[46924], 95.00th=[50594], 00:16:00.332 | 99.00th=[55313], 99.50th=[58983], 99.90th=[62653], 99.95th=[62653], 00:16:00.332 | 99.99th=[62653] 00:16:00.332 bw ( KiB/s): min= 9216, max=16944, per=24.12%, avg=13080.00, stdev=5464.52, samples=2 00:16:00.332 iops : min= 2304, max= 4236, avg=3270.00, stdev=1366.13, samples=2 00:16:00.332 lat (msec) : 2=0.20%, 4=0.68%, 10=22.86%, 20=45.80%, 50=26.62% 00:16:00.332 lat (msec) : 100=3.85% 00:16:00.332 cpu : usr=2.27%, sys=3.94%, ctx=326, majf=0, minf=2 00:16:00.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:00.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.332 issued rwts: total=3072,3398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.332 00:16:00.332 Run status group 0 (all jobs): 00:16:00.332 READ: bw=47.5MiB/s (49.8MB/s), 8151KiB/s-16.1MiB/s (8347kB/s-16.9MB/s), io=48.2MiB (50.6MB), run=1005-1015msec 00:16:00.332 WRITE: bw=53.0MiB/s (55.5MB/s), 9.86MiB/s-17.8MiB/s (10.3MB/s-18.7MB/s), io=53.7MiB (56.4MB), run=1005-1015msec 00:16:00.332 00:16:00.332 Disk stats (read/write): 00:16:00.332 nvme0n1: ios=4146/4287, merge=0/0, ticks=15180/10771, in_queue=25951, util=96.19% 00:16:00.332 nvme0n2: ios=2593/2615, merge=0/0, ticks=27316/23645, in_queue=50961, util=86.54% 00:16:00.332 nvme0n3: ios=1536/1999, merge=0/0, ticks=16485/34855, in_queue=51340, util=88.29% 00:16:00.332 nvme0n4: ios=2560/2983, merge=0/0, ticks=34559/67300, in_queue=101859, util=89.42% 00:16:00.332 16:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:00.332 [global] 00:16:00.332 thread=1 00:16:00.332 invalidate=1 00:16:00.332 rw=randwrite 00:16:00.332 time_based=1 00:16:00.332 runtime=1 00:16:00.332 ioengine=libaio 00:16:00.332 direct=1 00:16:00.332 bs=4096 00:16:00.332 iodepth=128 00:16:00.332 norandommap=0 00:16:00.332 numjobs=1 00:16:00.332 00:16:00.332 verify_dump=1 00:16:00.332 verify_backlog=512 00:16:00.332 verify_state_save=0 00:16:00.332 do_verify=1 00:16:00.332 verify=crc32c-intel 00:16:00.332 [job0] 00:16:00.332 filename=/dev/nvme0n1 00:16:00.332 [job1] 00:16:00.332 filename=/dev/nvme0n2 00:16:00.332 [job2] 00:16:00.332 filename=/dev/nvme0n3 00:16:00.332 [job3] 00:16:00.332 filename=/dev/nvme0n4 00:16:00.332 Could not 
set queue depth (nvme0n1) 00:16:00.332 Could not set queue depth (nvme0n2) 00:16:00.332 Could not set queue depth (nvme0n3) 00:16:00.332 Could not set queue depth (nvme0n4) 00:16:00.596 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.596 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.596 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.596 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.596 fio-3.35 00:16:00.596 Starting 4 threads 00:16:01.998 00:16:01.998 job0: (groupid=0, jobs=1): err= 0: pid=3037597: Tue Nov 5 16:40:08 2024 00:16:01.998 read: IOPS=8464, BW=33.1MiB/s (34.7MB/s)(33.2MiB/1005msec) 00:16:01.998 slat (nsec): min=896, max=8178.0k, avg=61572.04, stdev=415301.13 00:16:01.998 clat (usec): min=849, max=45929, avg=7629.01, stdev=2592.20 00:16:01.998 lat (usec): min=2418, max=45937, avg=7690.58, stdev=2617.60 00:16:01.998 clat percentiles (usec): 00:16:01.998 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6652], 00:16:01.998 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:16:01.998 | 70.00th=[ 7767], 80.00th=[ 8160], 90.00th=[ 8979], 95.00th=[10552], 00:16:01.998 | 99.00th=[16909], 99.50th=[17433], 99.90th=[45351], 99.95th=[45876], 00:16:01.998 | 99.99th=[45876] 00:16:01.998 write: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec); 0 zone resets 00:16:01.998 slat (nsec): min=1491, max=9685.6k, avg=50461.06, stdev=276324.14 00:16:01.998 clat (usec): min=1127, max=34116, avg=7183.35, stdev=2697.78 00:16:01.998 lat (usec): min=1138, max=34122, avg=7233.82, stdev=2706.61 00:16:01.998 clat percentiles (usec): 00:16:01.998 | 1.00th=[ 2999], 5.00th=[ 4359], 10.00th=[ 5342], 20.00th=[ 6521], 00:16:01.998 | 30.00th=[ 6718], 40.00th=[ 6849], 
50.00th=[ 6915], 60.00th=[ 6980], 00:16:01.998 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 8455], 95.00th=[ 9503], 00:16:01.998 | 99.00th=[20055], 99.50th=[27919], 99.90th=[33817], 99.95th=[34341], 00:16:01.998 | 99.99th=[34341] 00:16:01.998 bw ( KiB/s): min=33848, max=35784, per=36.92%, avg=34816.00, stdev=1368.96, samples=2 00:16:01.998 iops : min= 8462, max= 8946, avg=8704.00, stdev=342.24, samples=2 00:16:01.998 lat (usec) : 1000=0.01% 00:16:01.998 lat (msec) : 2=0.17%, 4=1.44%, 10=93.25%, 20=4.45%, 50=0.69% 00:16:01.998 cpu : usr=4.98%, sys=7.37%, ctx=973, majf=0, minf=1 00:16:01.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:01.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.998 issued rwts: total=8507,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.998 job1: (groupid=0, jobs=1): err= 0: pid=3037598: Tue Nov 5 16:40:08 2024 00:16:01.998 read: IOPS=3550, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1003msec) 00:16:01.998 slat (nsec): min=957, max=44295k, avg=156171.32, stdev=1054342.49 00:16:01.998 clat (usec): min=787, max=68252, avg=19789.06, stdev=10544.32 00:16:01.998 lat (usec): min=2859, max=68259, avg=19945.23, stdev=10573.39 00:16:01.998 clat percentiles (usec): 00:16:01.998 | 1.00th=[ 5997], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10421], 00:16:01.998 | 30.00th=[15533], 40.00th=[19530], 50.00th=[20841], 60.00th=[21365], 00:16:01.998 | 70.00th=[21365], 80.00th=[22152], 90.00th=[26346], 95.00th=[31589], 00:16:01.998 | 99.00th=[65274], 99.50th=[65799], 99.90th=[68682], 99.95th=[68682], 00:16:01.998 | 99.99th=[68682] 00:16:01.998 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:16:01.998 slat (nsec): min=1577, max=14189k, avg=117264.44, stdev=661831.10 00:16:01.998 clat (usec): min=3707, max=30134, 
avg=15778.22, stdev=5715.31 00:16:01.998 lat (usec): min=3717, max=30138, avg=15895.49, stdev=5743.24 00:16:01.998 clat percentiles (usec): 00:16:01.998 | 1.00th=[ 3818], 5.00th=[ 6128], 10.00th=[ 7767], 20.00th=[ 9765], 00:16:01.998 | 30.00th=[12649], 40.00th=[15139], 50.00th=[15664], 60.00th=[17433], 00:16:01.998 | 70.00th=[20055], 80.00th=[20317], 90.00th=[22676], 95.00th=[24773], 00:16:01.998 | 99.00th=[27919], 99.50th=[27919], 99.90th=[30016], 99.95th=[30016], 00:16:01.998 | 99.99th=[30016] 00:16:01.998 bw ( KiB/s): min=13440, max=15232, per=15.20%, avg=14336.00, stdev=1267.14, samples=2 00:16:01.998 iops : min= 3360, max= 3808, avg=3584.00, stdev=316.78, samples=2 00:16:01.998 lat (usec) : 1000=0.01% 00:16:01.998 lat (msec) : 4=1.36%, 10=18.11%, 20=36.17%, 50=42.58%, 100=1.78% 00:16:01.998 cpu : usr=3.09%, sys=3.99%, ctx=341, majf=0, minf=1 00:16:01.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:01.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.998 issued rwts: total=3561,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.998 job2: (groupid=0, jobs=1): err= 0: pid=3037599: Tue Nov 5 16:40:08 2024 00:16:01.998 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:16:01.998 slat (nsec): min=982, max=7493.6k, avg=143801.07, stdev=758004.41 00:16:01.998 clat (usec): min=7031, max=31503, avg=18500.85, stdev=4675.73 00:16:01.998 lat (usec): min=7036, max=31511, avg=18644.65, stdev=4662.96 00:16:01.998 clat percentiles (usec): 00:16:01.998 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11994], 20.00th=[13698], 00:16:01.998 | 30.00th=[14877], 40.00th=[18220], 50.00th=[19792], 60.00th=[20841], 00:16:01.998 | 70.00th=[21365], 80.00th=[21627], 90.00th=[23200], 95.00th=[25822], 00:16:01.998 | 99.00th=[30278], 99.50th=[30540], 
99.90th=[30802], 99.95th=[31589], 00:16:01.998 | 99.99th=[31589] 00:16:01.998 write: IOPS=3629, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1004msec); 0 zone resets 00:16:01.998 slat (nsec): min=1699, max=5381.3k, avg=127266.07, stdev=621920.39 00:16:01.998 clat (usec): min=583, max=31551, avg=16578.59, stdev=4391.20 00:16:01.998 lat (usec): min=3377, max=31559, avg=16705.86, stdev=4385.07 00:16:01.998 clat percentiles (usec): 00:16:01.998 | 1.00th=[ 6718], 5.00th=[10290], 10.00th=[11863], 20.00th=[12780], 00:16:01.998 | 30.00th=[14091], 40.00th=[15533], 50.00th=[15926], 60.00th=[16909], 00:16:01.998 | 70.00th=[19006], 80.00th=[20055], 90.00th=[21103], 95.00th=[23462], 00:16:01.998 | 99.00th=[29230], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:16:01.998 | 99.99th=[31589] 00:16:01.998 bw ( KiB/s): min=13376, max=15296, per=15.20%, avg=14336.00, stdev=1357.65, samples=2 00:16:01.998 iops : min= 3344, max= 3824, avg=3584.00, stdev=339.41, samples=2 00:16:01.998 lat (usec) : 750=0.01% 00:16:01.998 lat (msec) : 4=0.44%, 10=3.21%, 20=60.35%, 50=35.99% 00:16:01.998 cpu : usr=2.79%, sys=3.99%, ctx=355, majf=0, minf=1 00:16:01.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:01.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.998 issued rwts: total=3584,3644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.998 job3: (groupid=0, jobs=1): err= 0: pid=3037600: Tue Nov 5 16:40:08 2024 00:16:01.998 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:16:01.998 slat (nsec): min=915, max=6799.2k, avg=65705.32, stdev=453218.84 00:16:01.998 clat (usec): min=2034, max=14800, avg=8372.14, stdev=1273.89 00:16:01.998 lat (usec): min=2037, max=14827, avg=8437.84, stdev=1317.93 00:16:01.998 clat percentiles (usec): 00:16:01.998 | 1.00th=[ 5276], 5.00th=[ 6521], 
10.00th=[ 7046], 20.00th=[ 7701], 00:16:01.998 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8356], 00:16:01.998 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[10683], 00:16:01.998 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13829], 99.95th=[14091], 00:16:01.998 | 99.99th=[14746] 00:16:01.998 write: IOPS=7737, BW=30.2MiB/s (31.7MB/s)(30.3MiB/1003msec); 0 zone resets 00:16:01.998 slat (nsec): min=1535, max=7947.2k, avg=59308.19, stdev=372807.26 00:16:01.998 clat (usec): min=701, max=35288, avg=8108.03, stdev=3255.68 00:16:01.998 lat (usec): min=1203, max=35298, avg=8167.34, stdev=3278.46 00:16:01.998 clat percentiles (usec): 00:16:01.998 | 1.00th=[ 3818], 5.00th=[ 4817], 10.00th=[ 5407], 20.00th=[ 7177], 00:16:01.998 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7898], 00:16:01.998 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[10159], 95.00th=[10814], 00:16:01.998 | 99.00th=[29492], 99.50th=[30016], 99.90th=[34866], 99.95th=[34866], 00:16:01.998 | 99.99th=[35390] 00:16:01.998 bw ( KiB/s): min=29240, max=32248, per=32.60%, avg=30744.00, stdev=2126.98, samples=2 00:16:01.998 iops : min= 7310, max= 8062, avg=7686.00, stdev=531.74, samples=2 00:16:01.998 lat (usec) : 750=0.01% 00:16:01.998 lat (msec) : 2=0.15%, 4=0.56%, 10=89.47%, 20=9.03%, 50=0.78% 00:16:01.998 cpu : usr=4.49%, sys=7.29%, ctx=729, majf=0, minf=1 00:16:01.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:01.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.999 issued rwts: total=7680,7761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.999 00:16:01.999 Run status group 0 (all jobs): 00:16:01.999 READ: bw=90.7MiB/s (95.1MB/s), 13.9MiB/s-33.1MiB/s (14.5MB/s-34.7MB/s), io=91.1MiB (95.6MB), run=1003-1005msec 00:16:01.999 WRITE: bw=92.1MiB/s 
(96.6MB/s), 14.0MiB/s-33.8MiB/s (14.6MB/s-35.5MB/s), io=92.6MiB (97.0MB), run=1003-1005msec 00:16:01.999 00:16:01.999 Disk stats (read/write): 00:16:01.999 nvme0n1: ios=7218/7239, merge=0/0, ticks=34948/34754, in_queue=69702, util=87.68% 00:16:01.999 nvme0n2: ios=2592/2848, merge=0/0, ticks=16300/16313, in_queue=32613, util=97.86% 00:16:01.999 nvme0n3: ios=2636/3072, merge=0/0, ticks=13652/13041, in_queue=26693, util=97.15% 00:16:01.999 nvme0n4: ios=6333/6656, merge=0/0, ticks=35262/33798, in_queue=69060, util=88.69% 00:16:01.999 16:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:01.999 16:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3037934 00:16:01.999 16:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:01.999 16:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:01.999 [global] 00:16:01.999 thread=1 00:16:01.999 invalidate=1 00:16:01.999 rw=read 00:16:01.999 time_based=1 00:16:01.999 runtime=10 00:16:01.999 ioengine=libaio 00:16:01.999 direct=1 00:16:01.999 bs=4096 00:16:01.999 iodepth=1 00:16:01.999 norandommap=1 00:16:01.999 numjobs=1 00:16:01.999 00:16:01.999 [job0] 00:16:01.999 filename=/dev/nvme0n1 00:16:01.999 [job1] 00:16:01.999 filename=/dev/nvme0n2 00:16:01.999 [job2] 00:16:01.999 filename=/dev/nvme0n3 00:16:01.999 [job3] 00:16:01.999 filename=/dev/nvme0n4 00:16:01.999 Could not set queue depth (nvme0n1) 00:16:01.999 Could not set queue depth (nvme0n2) 00:16:01.999 Could not set queue depth (nvme0n3) 00:16:01.999 Could not set queue depth (nvme0n4) 00:16:02.261 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.261 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.261 job2: (g=0): rw=read, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.261 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.261 fio-3.35 00:16:02.261 Starting 4 threads 00:16:04.800 16:40:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:05.060 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=6791168, buflen=4096 00:16:05.060 fio: pid=3038127, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:05.060 16:40:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:05.060 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7290880, buflen=4096 00:16:05.060 fio: pid=3038126, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:05.060 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.060 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:05.319 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.319 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:05.319 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2285568, buflen=4096 00:16:05.319 fio: pid=3038124, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:05.578 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.578 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:05.578 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3325952, buflen=4096 00:16:05.578 fio: pid=3038125, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:05.579 00:16:05.579 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3038124: Tue Nov 5 16:40:12 2024 00:16:05.579 read: IOPS=190, BW=761KiB/s (779kB/s)(2232KiB/2933msec) 00:16:05.579 slat (usec): min=7, max=18583, avg=58.84, stdev=784.98 00:16:05.579 clat (usec): min=478, max=44334, avg=5151.09, stdev=12304.81 00:16:05.579 lat (usec): min=504, max=60019, avg=5209.98, stdev=12428.42 00:16:05.579 clat percentiles (usec): 00:16:05.579 | 1.00th=[ 611], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 914], 00:16:05.579 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 1004], 00:16:05.579 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[40633], 95.00th=[41157], 00:16:05.579 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:16:05.579 | 99.99th=[44303] 00:16:05.579 bw ( KiB/s): min= 112, max= 3272, per=13.99%, avg=872.00, stdev=1346.25, samples=5 00:16:05.579 iops : min= 28, max= 818, avg=218.00, stdev=336.56, samples=5 00:16:05.579 lat (usec) : 500=0.18%, 750=1.61%, 1000=57.96% 00:16:05.579 lat (msec) : 2=29.70%, 50=10.38% 00:16:05.579 cpu : usr=0.17%, sys=0.58%, ctx=561, majf=0, minf=1 00:16:05.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.579 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.579 issued rwts: total=559,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:05.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.579 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3038125: Tue Nov 5 16:40:12 2024 00:16:05.579 read: IOPS=263, BW=1053KiB/s (1078kB/s)(3248KiB/3085msec) 00:16:05.579 slat (usec): min=6, max=20537, avg=67.53, stdev=881.91 00:16:05.579 clat (usec): min=194, max=42940, avg=3713.18, stdev=10883.85 00:16:05.579 lat (usec): min=201, max=61967, avg=3780.76, stdev=11068.39 00:16:05.579 clat percentiles (usec): 00:16:05.579 | 1.00th=[ 273], 5.00th=[ 371], 10.00th=[ 424], 20.00th=[ 502], 00:16:05.579 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 668], 00:16:05.579 | 70.00th=[ 709], 80.00th=[ 758], 90.00th=[ 889], 95.00th=[41681], 00:16:05.579 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:05.579 | 99.99th=[42730] 00:16:05.579 bw ( KiB/s): min= 85, max= 2944, per=17.31%, avg=1079.50, stdev=1302.37, samples=6 00:16:05.579 iops : min= 21, max= 736, avg=269.83, stdev=325.63, samples=6 00:16:05.579 lat (usec) : 250=0.62%, 500=19.19%, 750=58.18%, 1000=13.28% 00:16:05.579 lat (msec) : 2=0.98%, 10=0.12%, 50=7.50% 00:16:05.579 cpu : usr=0.42%, sys=0.91%, ctx=815, majf=0, minf=2 00:16:05.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.579 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.579 issued rwts: total=813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.579 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3038126: Tue Nov 5 16:40:12 2024 00:16:05.579 read: IOPS=657, BW=2629KiB/s (2692kB/s)(7120KiB/2708msec) 00:16:05.579 slat (nsec): min=8916, max=60782, avg=25610.37, stdev=2758.15 00:16:05.579 clat 
(usec): min=637, max=42927, avg=1476.87, stdev=3982.14 00:16:05.579 lat (usec): min=663, max=42953, avg=1502.48, stdev=3982.19 00:16:05.579 clat percentiles (usec): 00:16:05.579 | 1.00th=[ 791], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1020], 00:16:05.579 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:16:05.579 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1237], 00:16:05.579 | 99.00th=[ 1336], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:05.579 | 99.99th=[42730] 00:16:05.579 bw ( KiB/s): min= 488, max= 3720, per=41.45%, avg=2584.00, stdev=1430.02, samples=5 00:16:05.579 iops : min= 122, max= 930, avg=646.00, stdev=357.51, samples=5 00:16:05.579 lat (usec) : 750=0.34%, 1000=14.94% 00:16:05.579 lat (msec) : 2=83.72%, 50=0.95% 00:16:05.579 cpu : usr=0.85%, sys=1.85%, ctx=1781, majf=0, minf=2 00:16:05.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.579 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.579 issued rwts: total=1781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.579 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3038127: Tue Nov 5 16:40:12 2024 00:16:05.579 read: IOPS=657, BW=2629KiB/s (2692kB/s)(6632KiB/2523msec) 00:16:05.579 slat (nsec): min=6704, max=61224, avg=24268.64, stdev=6815.10 00:16:05.579 clat (usec): min=145, max=43115, avg=1478.18, stdev=5441.41 00:16:05.579 lat (usec): min=153, max=43147, avg=1502.45, stdev=5442.04 00:16:05.579 clat percentiles (usec): 00:16:05.579 | 1.00th=[ 334], 5.00th=[ 437], 10.00th=[ 474], 20.00th=[ 529], 00:16:05.579 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 824], 00:16:05.579 | 70.00th=[ 971], 80.00th=[ 1037], 90.00th=[ 1090], 95.00th=[ 1139], 00:16:05.579 | 
99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:16:05.579 | 99.99th=[43254] 00:16:05.579 bw ( KiB/s): min= 272, max= 5968, per=42.54%, avg=2652.80, stdev=2250.56, samples=5 00:16:05.579 iops : min= 68, max= 1492, avg=663.20, stdev=562.64, samples=5 00:16:05.579 lat (usec) : 250=0.60%, 500=12.36%, 750=44.91%, 1000=15.43% 00:16:05.579 lat (msec) : 2=24.77%, 50=1.87% 00:16:05.579 cpu : usr=0.59%, sys=1.98%, ctx=1659, majf=0, minf=2 00:16:05.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.579 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.579 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.579 00:16:05.579 Run status group 0 (all jobs): 00:16:05.579 READ: bw=6234KiB/s (6384kB/s), 761KiB/s-2629KiB/s (779kB/s-2692kB/s), io=18.8MiB (19.7MB), run=2523-3085msec 00:16:05.579 00:16:05.579 Disk stats (read/write): 00:16:05.579 nvme0n1: ios=554/0, merge=0/0, ticks=2692/0, in_queue=2692, util=91.79% 00:16:05.579 nvme0n2: ios=810/0, merge=0/0, ticks=2862/0, in_queue=2862, util=93.04% 00:16:05.579 nvme0n3: ios=1626/0, merge=0/0, ticks=2422/0, in_queue=2422, util=95.45% 00:16:05.579 nvme0n4: ios=1659/0, merge=0/0, ticks=2411/0, in_queue=2411, util=96.17% 00:16:05.579 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.579 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:05.838 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.838 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:06.097 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.097 16:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:06.097 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.097 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:06.356 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:06.356 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3037934 00:16:06.356 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:06.356 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.356 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:06.356 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:16:06.356 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:06.356 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:06.615 16:40:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:06.615 nvmf hotplug test: fio failed as expected 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:06.615 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:06.615 rmmod nvme_tcp 00:16:06.615 rmmod 
nvme_fabrics 00:16:06.876 rmmod nvme_keyring 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 3034270 ']' 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 3034270 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3034270 ']' 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3034270 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3034270 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3034270' 00:16:06.876 killing process with pid 3034270 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3034270 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3034270 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:06.876 16:40:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:06.876 16:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:09.418 16:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:16:09.418 00:16:09.418 real 0m28.971s 00:16:09.418 user 2m28.919s 00:16:09.418 sys 0m9.151s 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.418 ************************************ 00:16:09.418 END TEST nvmf_fio_target 00:16:09.418 
************************************ 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:09.418 ************************************ 00:16:09.418 START TEST nvmf_bdevio 00:16:09.418 ************************************ 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:09.418 * Looking for test storage... 00:16:09.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 
00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:09.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.418 --rc genhtml_branch_coverage=1 00:16:09.418 --rc genhtml_function_coverage=1 00:16:09.418 --rc genhtml_legend=1 00:16:09.418 --rc geninfo_all_blocks=1 00:16:09.418 --rc geninfo_unexecuted_blocks=1 00:16:09.418 00:16:09.418 ' 00:16:09.418 16:40:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:09.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.418 --rc genhtml_branch_coverage=1 00:16:09.418 --rc genhtml_function_coverage=1 00:16:09.418 --rc genhtml_legend=1 00:16:09.418 --rc geninfo_all_blocks=1 00:16:09.418 --rc geninfo_unexecuted_blocks=1 00:16:09.418 00:16:09.418 ' 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:09.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.418 --rc genhtml_branch_coverage=1 00:16:09.418 --rc genhtml_function_coverage=1 00:16:09.418 --rc genhtml_legend=1 00:16:09.418 --rc geninfo_all_blocks=1 00:16:09.418 --rc geninfo_unexecuted_blocks=1 00:16:09.418 00:16:09.418 ' 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:09.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.418 --rc genhtml_branch_coverage=1 00:16:09.418 --rc genhtml_function_coverage=1 00:16:09.418 --rc genhtml_legend=1 00:16:09.418 --rc geninfo_all_blocks=1 00:16:09.418 --rc geninfo_unexecuted_blocks=1 00:16:09.418 00:16:09.418 ' 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.418 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
paths/export.sh@5 -- # export PATH 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:09.419 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:09.419 16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:16:09.419 
16:40:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:17.555 16:40:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:17.555 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:17.555 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:17.555 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:17.556 
16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:17.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:17.556 Found net devices under 0000:4b:00.1: 
cvl_0_1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@247 -- # create_target_ns 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:17.556 16:40:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:16:17.556 10.0.0.1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:17.556 10.0.0.2 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up cvl_0_0 
00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 
00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:16:17.556 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:17.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.678 ms 00:16:17.557 00:16:17.557 --- 10.0.0.1 ping statistics --- 00:16:17.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.557 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.2 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:17.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:16:17.557 00:16:17.557 --- 10.0.0.2 ping statistics --- 00:16:17.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.557 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:16:17.557 16:40:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:16:17.557 16:40:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # 
[[ -n target1 ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:16:17.557 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:16:17.558 ' 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=3043327 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@329 -- # waitforlisten 3043327 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3043327 ']' 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.558 16:40:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.558 [2024-11-05 16:40:23.818867] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:16:17.558 [2024-11-05 16:40:23.818919] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.558 [2024-11-05 16:40:23.916518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.558 [2024-11-05 16:40:23.969805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.558 [2024-11-05 16:40:23.969859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:17.558 [2024-11-05 16:40:23.969868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.558 [2024-11-05 16:40:23.969876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.558 [2024-11-05 16:40:23.969882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.558 [2024-11-05 16:40:23.972058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:17.558 [2024-11-05 16:40:23.972219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:17.558 [2024-11-05 16:40:23.972276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.558 [2024-11-05 16:40:23.972276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:17.816 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:17.816 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:16:17.816 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:17.816 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.817 [2024-11-05 16:40:24.691553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.817 Malloc0 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.817 [2024-11-05 
16:40:24.772938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:17.817 { 00:16:17.817 "params": { 00:16:17.817 "name": "Nvme$subsystem", 00:16:17.817 "trtype": "$TEST_TRANSPORT", 00:16:17.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.817 "adrfam": "ipv4", 00:16:17.817 "trsvcid": "$NVMF_PORT", 00:16:17.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.817 "hdgst": ${hdgst:-false}, 00:16:17.817 "ddgst": ${ddgst:-false} 00:16:17.817 }, 00:16:17.817 "method": "bdev_nvme_attach_controller" 00:16:17.817 } 00:16:17.817 EOF 00:16:17.817 )") 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:16:17.817 16:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:16:17.817 "params": { 00:16:17.817 "name": "Nvme1", 00:16:17.817 "trtype": "tcp", 00:16:17.817 "traddr": "10.0.0.2", 00:16:17.817 "adrfam": "ipv4", 00:16:17.817 "trsvcid": "4420", 00:16:17.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.817 "hdgst": false, 00:16:17.817 "ddgst": false 00:16:17.817 }, 00:16:17.817 "method": "bdev_nvme_attach_controller" 00:16:17.817 }' 00:16:17.817 [2024-11-05 16:40:24.832400] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:16:17.817 [2024-11-05 16:40:24.832469] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043537 ] 00:16:18.074 [2024-11-05 16:40:24.910289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.075 [2024-11-05 16:40:24.955013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.075 [2024-11-05 16:40:24.955132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.075 [2024-11-05 16:40:24.955135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.332 I/O targets: 00:16:18.332 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:18.332 00:16:18.332 00:16:18.332 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.332 http://cunit.sourceforge.net/ 00:16:18.332 00:16:18.332 00:16:18.332 Suite: bdevio tests on: Nvme1n1 00:16:18.332 Test: blockdev write read block ...passed 00:16:18.332 Test: blockdev write zeroes read block ...passed 00:16:18.332 Test: blockdev write zeroes read no split ...passed 00:16:18.589 Test: blockdev write zeroes read split 
...passed 00:16:18.589 Test: blockdev write zeroes read split partial ...passed 00:16:18.589 Test: blockdev reset ...[2024-11-05 16:40:25.476174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:18.589 [2024-11-05 16:40:25.476245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bd970 (9): Bad file descriptor 00:16:18.589 [2024-11-05 16:40:25.573419] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:16:18.589 passed 00:16:18.589 Test: blockdev write read 8 blocks ...passed 00:16:18.589 Test: blockdev write read size > 128k ...passed 00:16:18.589 Test: blockdev write read invalid size ...passed 00:16:18.846 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:18.846 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:18.846 Test: blockdev write read max offset ...passed 00:16:18.846 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:18.846 Test: blockdev writev readv 8 blocks ...passed 00:16:18.846 Test: blockdev writev readv 30 x 1block ...passed 00:16:18.846 Test: blockdev writev readv block ...passed 00:16:18.846 Test: blockdev writev readv size > 128k ...passed 00:16:18.846 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:18.846 Test: blockdev comparev and writev ...[2024-11-05 16:40:25.838918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.846 [2024-11-05 16:40:25.838941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.846 [2024-11-05 16:40:25.838953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.846 [2024-11-05 
16:40:25.838959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.846 [2024-11-05 16:40:25.839464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.846 [2024-11-05 16:40:25.839472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:18.846 [2024-11-05 16:40:25.839482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.846 [2024-11-05 16:40:25.839488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:18.846 [2024-11-05 16:40:25.839963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.846 [2024-11-05 16:40:25.839971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:18.846 [2024-11-05 16:40:25.839981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.846 [2024-11-05 16:40:25.839986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:18.846 [2024-11-05 16:40:25.840476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.846 [2024-11-05 16:40:25.840484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:18.846 [2024-11-05 16:40:25.840494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.846 [2024-11-05 16:40:25.840499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:18.846 passed 00:16:19.103 Test: blockdev nvme passthru rw ...passed 00:16:19.103 Test: blockdev nvme passthru vendor specific ...[2024-11-05 16:40:25.925612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.103 [2024-11-05 16:40:25.925628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:19.103 [2024-11-05 16:40:25.926084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.103 [2024-11-05 16:40:25.926092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:19.103 [2024-11-05 16:40:25.926343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.103 [2024-11-05 16:40:25.926350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:19.103 [2024-11-05 16:40:25.926626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.103 [2024-11-05 16:40:25.926633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:19.103 passed 00:16:19.103 Test: blockdev nvme admin passthru ...passed 00:16:19.103 Test: blockdev copy ...passed 00:16:19.103 00:16:19.103 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.103 suites 1 1 n/a 0 0 00:16:19.103 tests 23 23 23 0 0 00:16:19.103 asserts 152 152 152 0 n/a 00:16:19.103 00:16:19.103 Elapsed time = 1.437 seconds 
00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:19.103 rmmod nvme_tcp 00:16:19.103 rmmod nvme_fabrics 00:16:19.103 rmmod nvme_keyring 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 3043327 ']' 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 3043327 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- 
# '[' -z 3043327 ']' 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3043327 00:16:19.103 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3043327 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3043327' 00:16:19.363 killing process with pid 3043327 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3043327 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3043327 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:19.363 16:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:21.905 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 
-- # delete_main_bridge 00:16:21.905 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush 
dev cvl_0_1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:21.906 00:16:21.906 real 0m12.365s 00:16:21.906 user 0m14.444s 00:16:21.906 sys 0m6.114s 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.906 ************************************ 00:16:21.906 END TEST nvmf_bdevio 00:16:21.906 ************************************ 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:21.906 00:16:21.906 real 5m2.623s 00:16:21.906 user 11m38.292s 00:16:21.906 sys 1m47.337s 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:21.906 ************************************ 00:16:21.906 END TEST nvmf_target_core 00:16:21.906 ************************************ 00:16:21.906 16:40:28 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:16:21.906 16:40:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:21.906 16:40:28 nvmf_tcp -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:16:21.906 16:40:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.906 ************************************ 00:16:21.906 START TEST nvmf_target_extra 00:16:21.906 ************************************ 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:16:21.906 * Looking for test storage... 00:16:21.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:21.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.906 --rc genhtml_branch_coverage=1 
00:16:21.906 --rc genhtml_function_coverage=1 00:16:21.906 --rc genhtml_legend=1 00:16:21.906 --rc geninfo_all_blocks=1 00:16:21.906 --rc geninfo_unexecuted_blocks=1 00:16:21.906 00:16:21.906 ' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:21.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.906 --rc genhtml_branch_coverage=1 00:16:21.906 --rc genhtml_function_coverage=1 00:16:21.906 --rc genhtml_legend=1 00:16:21.906 --rc geninfo_all_blocks=1 00:16:21.906 --rc geninfo_unexecuted_blocks=1 00:16:21.906 00:16:21.906 ' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:21.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.906 --rc genhtml_branch_coverage=1 00:16:21.906 --rc genhtml_function_coverage=1 00:16:21.906 --rc genhtml_legend=1 00:16:21.906 --rc geninfo_all_blocks=1 00:16:21.906 --rc geninfo_unexecuted_blocks=1 00:16:21.906 00:16:21.906 ' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:21.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.906 --rc genhtml_branch_coverage=1 00:16:21.906 --rc genhtml_function_coverage=1 00:16:21.906 --rc genhtml_legend=1 00:16:21.906 --rc geninfo_all_blocks=1 00:16:21.906 --rc geninfo_unexecuted_blocks=1 00:16:21.906 00:16:21.906 ' 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.906 16:40:28 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.906 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:21.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 
00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:21.907 ************************************ 00:16:21.907 START TEST nvmf_example 00:16:21.907 ************************************ 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:21.907 * Looking for test storage... 
00:16:21.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:16:21.907 16:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.169 --rc genhtml_branch_coverage=1 00:16:22.169 --rc 
genhtml_function_coverage=1 00:16:22.169 --rc genhtml_legend=1 00:16:22.169 --rc geninfo_all_blocks=1 00:16:22.169 --rc geninfo_unexecuted_blocks=1 00:16:22.169 00:16:22.169 ' 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.169 --rc genhtml_branch_coverage=1 00:16:22.169 --rc genhtml_function_coverage=1 00:16:22.169 --rc genhtml_legend=1 00:16:22.169 --rc geninfo_all_blocks=1 00:16:22.169 --rc geninfo_unexecuted_blocks=1 00:16:22.169 00:16:22.169 ' 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.169 --rc genhtml_branch_coverage=1 00:16:22.169 --rc genhtml_function_coverage=1 00:16:22.169 --rc genhtml_legend=1 00:16:22.169 --rc geninfo_all_blocks=1 00:16:22.169 --rc geninfo_unexecuted_blocks=1 00:16:22.169 00:16:22.169 ' 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.169 --rc genhtml_branch_coverage=1 00:16:22.169 --rc genhtml_function_coverage=1 00:16:22.169 --rc genhtml_legend=1 00:16:22.169 --rc geninfo_all_blocks=1 00:16:22.169 --rc geninfo_unexecuted_blocks=1 00:16:22.169 00:16:22.169 ' 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.169 16:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 
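The `cmp_versions 1.15 '<' 2` trace above shows `scripts/common.sh` splitting each version string on `IFS=.-:` into an array and comparing components numerically, index by index. A self-contained sketch of that technique (a hypothetical `version_lt` standing in for the real helper, and valid only for numeric components):

```shell
# Compare dotted version strings the way the xtrace above does:
# split on '.', '-' and ':' into arrays, then compare element-wise,
# treating a missing component as 0.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the lcov version `1.15` above takes the `lt 1.15 2` branch and the script selects the older `--rc lcov_*` option spelling.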
00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:22.169 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:22.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:16:22.170 
16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:16:22.170 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.328 16:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.328 16:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:30.328 Found 0000:4b:00.0 (0x8086 - 0x159b) 
00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:30.328 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
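The device discovery above (`common.sh@227`/`@243`–`@245`) maps each PCI address to its kernel net device names by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the leading path. The same two-step glob can be exercised against a throwaway directory tree standing in for sysfs (directory layout and device names here are illustrative only):

```shell
# Reproduce the pci -> net-device mapping glob from the trace against
# a fake sysfs tree, so it runs without real hardware.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0" "$sysfs/0000:4b:00.1/net/cvl_0_1"

net_devs=()
for pci in "$sysfs"/*; do
    pci_net_devs=("$pci/net/"*)               # glob, as in common.sh@227
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, as in @243
    net_devs+=("${pci_net_devs[@]}")          # accumulate, as in @245
done
printf '%s\n' "${net_devs[@]}"
rm -rf "$sysfs"
```

Each PCI function contributes one entry, which is how the log arrives at `cvl_0_0` under `0000:4b:00.0` and `cvl_0_1` under `0000:4b:00.1`.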
00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:30.328 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:30.328 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- 
# net_devs+=("${pci_net_devs[@]}") 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@247 -- # create_target_ns 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:30.328 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo 
up' 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:16:30.329 
16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:30.329 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:16:30.329 10.0.0.1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:30.329 10.0.0.2 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:16:30.329 
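The `val_to_ip` calls above turn the pool integers `167772161`/`167772162` into `10.0.0.1`/`10.0.0.2` via `printf '%u.%u.%u.%u\n'`. Consistent with that output, the conversion can be sketched with arithmetic shifts (the helper's exact internals are an assumption; only its printf and result appear in the trace):

```shell
# Convert a 32-bit integer to dotted-quad notation, matching the
# val_to_ip output seen in the log (167772161 == 0x0A000001).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as an integer (`ip_pool=0x0a000001`, bumped by 2 per initiator/target pair) makes address allocation a simple arithmetic increment.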
16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:30.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.564 ms 00:16:30.329 00:16:30.329 --- 10.0.0.1 ping statistics --- 00:16:30.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.329 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:16:30.329 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:30.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:16:30.330 00:16:30.330 --- 10.0.0.2 ping statistics --- 00:16:30.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.330 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # 
get_initiator_ip_address 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # 
local dev=initiator1 in_ns= ip 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # return 1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev= 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@160 -- # return 0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 
-- # [[ -n target0 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@98 -- # local dev=target1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # return 1 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev= 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@160 -- # return 0 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:16:30.330 ' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3048281 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3048281 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3048281 ']' 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:30.330 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:30.331 16:40:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:42.581 Initializing NVMe Controllers 00:16:42.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:42.581 Initialization complete. Launching workers. 00:16:42.581 ======================================================== 00:16:42.581 Latency(us) 00:16:42.581 Device Information : IOPS MiB/s Average min max 00:16:42.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18967.87 74.09 3373.48 590.34 15873.07 00:16:42.581 ======================================================== 00:16:42.581 Total : 18967.87 74.09 3373.48 590.34 15873.07 00:16:42.581 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:42.581 rmmod nvme_tcp 00:16:42.581 rmmod nvme_fabrics 00:16:42.581 rmmod nvme_keyring 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- 
# return 0 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 3048281 ']' 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 3048281 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3048281 ']' 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3048281 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3048281 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3048281' 00:16:42.581 killing process with pid 3048281 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3048281 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3048281 00:16:42.581 nvmf threads initialize successfully 00:16:42.581 bdev subsystem init successfully 00:16:42.581 created a nvmf target service 00:16:42.581 create targets's poll groups done 00:16:42.581 all subsystems of target started 00:16:42.581 nvmf target is running 00:16:42.581 all subsystems of target stopped 00:16:42.581 destroy targets's poll groups done 00:16:42.581 destroyed the nvmf target service 00:16:42.581 bdev subsystem finish successfully 00:16:42.581 nvmf threads destroy successfully 00:16:42.581 16:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:42.581 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:16:42.582 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@254 -- # local dev 00:16:42.582 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:42.582 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:42.582 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:42.582 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # return 0 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # ip addr flush dev 
cvl_0_0 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@274 -- # iptr 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-save 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-restore 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:43.153 00:16:43.153 real 
0m21.221s 00:16:43.153 user 0m46.877s 00:16:43.153 sys 0m6.683s 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:43.153 ************************************ 00:16:43.153 END TEST nvmf_example 00:16:43.153 ************************************ 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.153 ************************************ 00:16:43.153 START TEST nvmf_filesystem 00:16:43.153 ************************************ 00:16:43.153 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:43.417 * Looking for test storage... 
00:16:43.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:16:43.417 
16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.417 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:43.418 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:43.418 --rc genhtml_branch_coverage=1 00:16:43.418 --rc genhtml_function_coverage=1 00:16:43.418 --rc genhtml_legend=1 00:16:43.418 --rc geninfo_all_blocks=1 00:16:43.418 --rc geninfo_unexecuted_blocks=1 00:16:43.418 00:16:43.418 ' 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.418 --rc genhtml_branch_coverage=1 00:16:43.418 --rc genhtml_function_coverage=1 00:16:43.418 --rc genhtml_legend=1 00:16:43.418 --rc geninfo_all_blocks=1 00:16:43.418 --rc geninfo_unexecuted_blocks=1 00:16:43.418 00:16:43.418 ' 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.418 --rc genhtml_branch_coverage=1 00:16:43.418 --rc genhtml_function_coverage=1 00:16:43.418 --rc genhtml_legend=1 00:16:43.418 --rc geninfo_all_blocks=1 00:16:43.418 --rc geninfo_unexecuted_blocks=1 00:16:43.418 00:16:43.418 ' 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:43.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.418 --rc genhtml_branch_coverage=1 00:16:43.418 --rc genhtml_function_coverage=1 00:16:43.418 --rc genhtml_legend=1 00:16:43.418 --rc geninfo_all_blocks=1 00:16:43.418 --rc geninfo_unexecuted_blocks=1 00:16:43.418 00:16:43.418 ' 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:16:43.418 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:43.418 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:43.418 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:43.418 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:43.418 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:43.419 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:43.419 
16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:43.419 #define SPDK_CONFIG_H 00:16:43.419 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:43.419 #define SPDK_CONFIG_APPS 1 00:16:43.419 #define SPDK_CONFIG_ARCH native 00:16:43.419 #undef SPDK_CONFIG_ASAN 00:16:43.419 #undef SPDK_CONFIG_AVAHI 00:16:43.419 #undef SPDK_CONFIG_CET 00:16:43.419 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:43.419 #define SPDK_CONFIG_COVERAGE 1 00:16:43.419 #define SPDK_CONFIG_CROSS_PREFIX 00:16:43.419 #undef SPDK_CONFIG_CRYPTO 00:16:43.419 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:43.419 #undef SPDK_CONFIG_CUSTOMOCF 00:16:43.419 #undef SPDK_CONFIG_DAOS 00:16:43.419 #define SPDK_CONFIG_DAOS_DIR 00:16:43.419 #define SPDK_CONFIG_DEBUG 1 00:16:43.419 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:43.419 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:43.419 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:43.419 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:43.419 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:43.419 #undef SPDK_CONFIG_DPDK_UADK 00:16:43.419 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:43.419 #define SPDK_CONFIG_EXAMPLES 1 00:16:43.419 #undef SPDK_CONFIG_FC 00:16:43.419 #define SPDK_CONFIG_FC_PATH 00:16:43.419 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:43.419 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:43.419 #define SPDK_CONFIG_FSDEV 1 00:16:43.419 #undef SPDK_CONFIG_FUSE 00:16:43.419 #undef SPDK_CONFIG_FUZZER 00:16:43.419 #define SPDK_CONFIG_FUZZER_LIB 00:16:43.419 #undef SPDK_CONFIG_GOLANG 00:16:43.419 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:43.419 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:43.419 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:43.419 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:43.419 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:43.419 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:43.419 #undef SPDK_CONFIG_HAVE_LZ4 00:16:43.419 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:43.419 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:43.419 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:43.419 #define SPDK_CONFIG_IDXD 1 00:16:43.419 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:43.419 #undef SPDK_CONFIG_IPSEC_MB 00:16:43.419 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:43.419 #define SPDK_CONFIG_ISAL 1 00:16:43.419 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:43.419 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:43.419 #define SPDK_CONFIG_LIBDIR 00:16:43.419 #undef SPDK_CONFIG_LTO 00:16:43.419 #define SPDK_CONFIG_MAX_LCORES 128 00:16:43.419 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:43.419 #define SPDK_CONFIG_NVME_CUSE 1 00:16:43.419 #undef SPDK_CONFIG_OCF 00:16:43.419 #define SPDK_CONFIG_OCF_PATH 00:16:43.419 #define SPDK_CONFIG_OPENSSL_PATH 00:16:43.419 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:43.419 #define SPDK_CONFIG_PGO_DIR 00:16:43.419 #undef SPDK_CONFIG_PGO_USE 00:16:43.419 #define SPDK_CONFIG_PREFIX /usr/local 00:16:43.419 #undef SPDK_CONFIG_RAID5F 00:16:43.419 #undef SPDK_CONFIG_RBD 00:16:43.419 #define SPDK_CONFIG_RDMA 1 00:16:43.419 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:43.419 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:43.419 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:43.419 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:43.419 #define SPDK_CONFIG_SHARED 1 00:16:43.419 #undef SPDK_CONFIG_SMA 00:16:43.419 #define SPDK_CONFIG_TESTS 1 00:16:43.419 #undef SPDK_CONFIG_TSAN 00:16:43.419 #define SPDK_CONFIG_UBLK 1 00:16:43.419 #define SPDK_CONFIG_UBSAN 1 00:16:43.419 #undef SPDK_CONFIG_UNIT_TESTS 00:16:43.419 #undef SPDK_CONFIG_URING 00:16:43.419 #define SPDK_CONFIG_URING_PATH 00:16:43.419 #undef SPDK_CONFIG_URING_ZNS 00:16:43.419 #undef SPDK_CONFIG_USDT 00:16:43.419 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:43.419 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:43.419 #define SPDK_CONFIG_VFIO_USER 1 00:16:43.419 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:43.419 #define SPDK_CONFIG_VHOST 1 00:16:43.419 #define SPDK_CONFIG_VIRTIO 1 00:16:43.419 #undef SPDK_CONFIG_VTUNE 00:16:43.419 #define SPDK_CONFIG_VTUNE_DIR 00:16:43.419 #define SPDK_CONFIG_WERROR 1 00:16:43.419 #define SPDK_CONFIG_WPDK_DIR 00:16:43.419 #undef SPDK_CONFIG_XNVME 00:16:43.419 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.419 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:16:43.420 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:43.420 
16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:16:43.420 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:43.420 
16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:16:43.420 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:16:43.421 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:43.421 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3051075 ]] 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3051075 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.yn64CI 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yn64CI/tests/target /tmp/spdk.yn64CI 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122536439808 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356541952 00:16:43.422 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6820102144 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668237824 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847947264 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:16:43.423 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677433344 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=839680 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:16:43.423 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:16:43.685 * Looking for test storage... 
00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122536439808 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9034694656 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.685 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.685 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:43.686 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:43.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.686 --rc genhtml_branch_coverage=1 00:16:43.686 --rc genhtml_function_coverage=1 00:16:43.686 --rc genhtml_legend=1 00:16:43.686 --rc geninfo_all_blocks=1 00:16:43.686 --rc geninfo_unexecuted_blocks=1 00:16:43.686 00:16:43.686 ' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:43.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.686 --rc genhtml_branch_coverage=1 00:16:43.686 --rc genhtml_function_coverage=1 00:16:43.686 --rc genhtml_legend=1 00:16:43.686 --rc geninfo_all_blocks=1 00:16:43.686 --rc geninfo_unexecuted_blocks=1 00:16:43.686 00:16:43.686 ' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:43.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.686 --rc genhtml_branch_coverage=1 00:16:43.686 --rc genhtml_function_coverage=1 00:16:43.686 --rc genhtml_legend=1 00:16:43.686 --rc geninfo_all_blocks=1 00:16:43.686 --rc geninfo_unexecuted_blocks=1 00:16:43.686 00:16:43.686 ' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:43.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.686 --rc genhtml_branch_coverage=1 00:16:43.686 --rc genhtml_function_coverage=1 00:16:43.686 --rc genhtml_legend=1 00:16:43.686 --rc geninfo_all_blocks=1 00:16:43.686 --rc geninfo_unexecuted_blocks=1 00:16:43.686 00:16:43.686 ' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.686 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.686 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.686 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.686 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:43.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.687 16:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable 00:16:43.687 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=() 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:51.985 16:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=() 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=() 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=() 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=() 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:51.985 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == 
rdma ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:51.985 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:51.985 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.985 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:51.986 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@247 -- # create_target_ns 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:16:51.986 16:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:16:51.986 10.0.0.1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:51.986 10.0.0.2 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:51.986 16:40:57 
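The `val_to_ip` calls traced above turn the integer pool values 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 before `ip addr add` runs. A minimal standalone sketch of that conversion (the shift/mask arithmetic is an assumption about how `nvmf/setup.sh` derives the four octets; only the inputs and outputs are taken from the trace):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer into dotted-quad IPv4 notation, mirroring
# the val_to_ip output seen in the trace (167772161 == 0x0A000001).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}

val_to_ip 167772161   # -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # -> 10.0.0.2 (target side, inside nvmf_ns_spdk)
```

Per the `ips=("$ip" $((++ip)))` line above, the pool advances by two addresses per interface pair: one for the initiator device and one for the target device moved into the namespace.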
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:16:51.986 16:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:51.986 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/cvl_0_0/ifalias 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:51.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.589 ms 00:16:51.987 00:16:51.987 --- 10.0.0.1 ping statistics --- 00:16:51.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.987 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:51.987 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:51.987 16:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:51.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:16:51.987 00:16:51.987 --- 10.0.0.2 ping statistics --- 00:16:51.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.987 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:16:51.987 
16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # return 1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev= 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@160 -- # return 0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:51.987 16:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # return 1 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev= 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@160 -- # return 0 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:16:51.987 ' 00:16:51.987 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test 
nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:51.988 ************************************ 00:16:51.988 START TEST nvmf_filesystem_no_in_capsule 00:16:51.988 ************************************ 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=3054848 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 3054848 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:51.988 16:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3054848 ']' 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:51.988 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:51.988 [2024-11-05 16:40:58.243673] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:16:51.988 [2024-11-05 16:40:58.243740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.988 [2024-11-05 16:40:58.329406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.988 [2024-11-05 16:40:58.370592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.988 [2024-11-05 16:40:58.370630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:51.988 [2024-11-05 16:40:58.370638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.988 [2024-11-05 16:40:58.370646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.988 [2024-11-05 16:40:58.370652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.988 [2024-11-05 16:40:58.372524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.988 [2024-11-05 16:40:58.372666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.988 [2024-11-05 16:40:58.372823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.988 [2024-11-05 16:40:58.372823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.248 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:52.249 [2024-11-05 16:40:59.095742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:52.249 Malloc1 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:52.249 [2024-11-05 16:40:59.229331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:16:52.249 16:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:52.249 { 00:16:52.249 "name": "Malloc1", 00:16:52.249 "aliases": [ 00:16:52.249 "4746c32b-e300-455b-9004-70e610222f3b" 00:16:52.249 ], 00:16:52.249 "product_name": "Malloc disk", 00:16:52.249 "block_size": 512, 00:16:52.249 "num_blocks": 1048576, 00:16:52.249 "uuid": "4746c32b-e300-455b-9004-70e610222f3b", 00:16:52.249 "assigned_rate_limits": { 00:16:52.249 "rw_ios_per_sec": 0, 00:16:52.249 "rw_mbytes_per_sec": 0, 00:16:52.249 "r_mbytes_per_sec": 0, 00:16:52.249 "w_mbytes_per_sec": 0 00:16:52.249 }, 00:16:52.249 "claimed": true, 00:16:52.249 "claim_type": "exclusive_write", 00:16:52.249 "zoned": false, 00:16:52.249 "supported_io_types": { 00:16:52.249 "read": true, 00:16:52.249 "write": true, 00:16:52.249 "unmap": true, 00:16:52.249 "flush": true, 00:16:52.249 "reset": true, 00:16:52.249 "nvme_admin": false, 00:16:52.249 "nvme_io": false, 00:16:52.249 "nvme_io_md": false, 00:16:52.249 "write_zeroes": true, 00:16:52.249 "zcopy": true, 00:16:52.249 "get_zone_info": false, 00:16:52.249 "zone_management": false, 00:16:52.249 "zone_append": false, 00:16:52.249 "compare": false, 00:16:52.249 "compare_and_write": 
false, 00:16:52.249 "abort": true, 00:16:52.249 "seek_hole": false, 00:16:52.249 "seek_data": false, 00:16:52.249 "copy": true, 00:16:52.249 "nvme_iov_md": false 00:16:52.249 }, 00:16:52.249 "memory_domains": [ 00:16:52.249 { 00:16:52.249 "dma_device_id": "system", 00:16:52.249 "dma_device_type": 1 00:16:52.249 }, 00:16:52.249 { 00:16:52.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.249 "dma_device_type": 2 00:16:52.249 } 00:16:52.249 ], 00:16:52.249 "driver_specific": {} 00:16:52.249 } 00:16:52.249 ]' 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:16:52.249 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:52.509 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:16:52.510 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:16:52.510 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:16:52.510 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:52.510 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:53.895 16:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
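`get_bdev_size` above extracts `block_size` and `num_blocks` from the `bdev_get_bdevs` JSON with `jq`, and the trace then compares the result against `malloc_size=536870912`. The arithmetic can be reproduced standalone; the intermediate variable names here are illustrative, not the script's own:

```shell
#!/usr/bin/env bash
# Recompute the sizes seen in the trace: a Malloc bdev with 512-byte
# blocks and 1048576 blocks is 512 MiB, i.e. 536870912 bytes.
bs=512                                      # jq '.[] .block_size'
nb=1048576                                  # jq '.[] .num_blocks'
bdev_size_mb=$(( bs * nb / 1024 / 1024 ))   # get_bdev_size result: 512
malloc_size=$(( bdev_size_mb * 1024 * 1024 ))

echo "$bdev_size_mb"   # 512
echo "$malloc_size"    # 536870912
```

This matches the `(( nvme_size == malloc_size ))` check the test performs later against `/sys/block/nvme0n1`.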
# waitforserial SPDKISFASTANDAWESOME 00:16:53.895 16:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:16:53.895 16:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.895 16:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:53.895 16:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:16:55.811 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:55.811 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:55.811 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.811 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:55.811 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.811 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:16:55.811 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:55.811 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:56.072 16:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:56.072 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:56.072 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:56.072 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:56.072 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:56.072 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:56.072 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:56.072 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:56.072 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:56.072 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:56.072 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:57.458 16:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:57.458 ************************************ 00:16:57.458 START TEST filesystem_ext4 00:16:57.458 ************************************ 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:16:57.458 16:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:57.458 mke2fs 1.47.0 (5-Feb-2023) 00:16:57.458 Discarding device blocks: 0/522240 done 00:16:57.458 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:57.458 Filesystem UUID: 9e52d7cf-46af-453a-966b-35236e771b71 00:16:57.458 Superblock backups stored on blocks: 00:16:57.458 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:57.458 00:16:57.458 Allocating group tables: 0/64 done 00:16:57.458 Writing inode tables: 0/64 done 00:16:57.458 Creating journal (8192 blocks): done 00:16:57.458 Writing superblocks and filesystem accounting information: 0/64 done 00:16:57.458 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:16:57.458 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:04.047 16:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3054848 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:04.047 00:17:04.047 real 0m6.086s 00:17:04.047 user 0m0.030s 00:17:04.047 sys 0m0.081s 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 ************************************ 00:17:04.047 END TEST filesystem_ext4 00:17:04.047 ************************************ 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:04.047 
16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 ************************************ 00:17:04.047 START TEST filesystem_btrfs 00:17:04.047 ************************************ 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:17:04.047 16:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:04.047 btrfs-progs v6.8.1 00:17:04.047 See https://btrfs.readthedocs.io for more information. 00:17:04.047 00:17:04.047 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:17:04.047 NOTE: several default settings have changed in version 5.15, please make sure 00:17:04.047 this does not affect your deployments: 00:17:04.047 - DUP for metadata (-m dup) 00:17:04.047 - enabled no-holes (-O no-holes) 00:17:04.047 - enabled free-space-tree (-R free-space-tree) 00:17:04.047 00:17:04.047 Label: (null) 00:17:04.047 UUID: f840777b-86bc-42ac-8a4b-a8180a702d67 00:17:04.047 Node size: 16384 00:17:04.047 Sector size: 4096 (CPU page size: 4096) 00:17:04.047 Filesystem size: 510.00MiB 00:17:04.047 Block group profiles: 00:17:04.047 Data: single 8.00MiB 00:17:04.047 Metadata: DUP 32.00MiB 00:17:04.047 System: DUP 8.00MiB 00:17:04.047 SSD detected: yes 00:17:04.047 Zoned device: no 00:17:04.047 Features: extref, skinny-metadata, no-holes, free-space-tree 00:17:04.047 Checksum: crc32c 00:17:04.047 Number of devices: 1 00:17:04.047 Devices: 00:17:04.047 ID SIZE PATH 00:17:04.047 1 510.00MiB /dev/nvme0n1p1 00:17:04.047 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:17:04.047 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:04.990 16:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3054848 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:04.990 00:17:04.990 real 0m1.494s 00:17:04.990 user 0m0.039s 00:17:04.990 sys 0m0.109s 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:04.990 
16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:17:04.990 ************************************ 00:17:04.990 END TEST filesystem_btrfs 00:17:04.990 ************************************ 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:04.990 ************************************ 00:17:04.990 START TEST filesystem_xfs 00:17:04.990 ************************************ 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:17:04.990 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:04.991 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:04.991 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:17:04.991 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:17:04.991 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:17:04.991 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:17:04.991 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:17:04.991 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:17:04.991 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:04.991 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:04.991 = sectsz=512 attr=2, projid32bit=1 00:17:04.991 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:04.991 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:04.991 data = bsize=4096 blocks=130560, imaxpct=25 00:17:04.991 = sunit=0 swidth=0 blks 00:17:04.991 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:04.991 log =internal log bsize=4096 blocks=16384, version=2 00:17:04.991 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:04.991 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:06.376 Discarding blocks...Done. 
00:17:06.376 16:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:17:06.376 16:41:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:08.288 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3054848 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:08.288 16:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:08.288 00:17:08.288 real 0m3.174s 00:17:08.288 user 0m0.024s 00:17:08.288 sys 0m0.079s 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:17:08.288 ************************************ 00:17:08.288 END TEST filesystem_xfs 00:17:08.288 ************************************ 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:08.288 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:17:08.549 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:08.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3054848 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3054848 ']' 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3054848 00:17:08.809 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:17:08.810 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:08.810 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3054848 00:17:08.810 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:08.810 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:08.810 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3054848' 00:17:08.810 killing process with pid 3054848 00:17:08.810 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3054848 00:17:08.810 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 3054848 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:17:09.070 00:17:09.070 real 0m17.865s 00:17:09.070 user 1m10.550s 00:17:09.070 sys 0m1.463s 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.070 ************************************ 00:17:09.070 END TEST nvmf_filesystem_no_in_capsule 00:17:09.070 ************************************ 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:09.070 16:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:09.070 ************************************ 00:17:09.070 START TEST nvmf_filesystem_in_capsule 00:17:09.070 ************************************ 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=3058650 00:17:09.070 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 3058650 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3058650 ']' 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.332 16:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.332 [2024-11-05 16:41:16.191223] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:17:09.332 [2024-11-05 16:41:16.191272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.332 [2024-11-05 16:41:16.274233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.332 [2024-11-05 16:41:16.309611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.332 [2024-11-05 16:41:16.309646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.332 [2024-11-05 16:41:16.309654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.332 [2024-11-05 16:41:16.309660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.332 [2024-11-05 16:41:16.309666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:09.332 [2024-11-05 16:41:16.311202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.332 [2024-11-05 16:41:16.311315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.332 [2024-11-05 16:41:16.311470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.332 [2024-11-05 16:41:16.311471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.332 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.592 [2024-11-05 16:41:16.435351] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.592 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.593 Malloc1 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.593 16:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.593 [2024-11-05 16:41:16.560301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.593 16:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:09.593 { 00:17:09.593 "name": "Malloc1", 00:17:09.593 "aliases": [ 00:17:09.593 "88dd1ffc-212f-48f0-aa80-a106ed36b2a3" 00:17:09.593 ], 00:17:09.593 "product_name": "Malloc disk", 00:17:09.593 "block_size": 512, 00:17:09.593 "num_blocks": 1048576, 00:17:09.593 "uuid": "88dd1ffc-212f-48f0-aa80-a106ed36b2a3", 00:17:09.593 "assigned_rate_limits": { 00:17:09.593 "rw_ios_per_sec": 0, 00:17:09.593 "rw_mbytes_per_sec": 0, 00:17:09.593 "r_mbytes_per_sec": 0, 00:17:09.593 "w_mbytes_per_sec": 0 00:17:09.593 }, 00:17:09.593 "claimed": true, 00:17:09.593 "claim_type": "exclusive_write", 00:17:09.593 "zoned": false, 00:17:09.593 "supported_io_types": { 00:17:09.593 "read": true, 00:17:09.593 "write": true, 00:17:09.593 "unmap": true, 00:17:09.593 "flush": true, 00:17:09.593 "reset": true, 00:17:09.593 "nvme_admin": false, 00:17:09.593 "nvme_io": false, 00:17:09.593 "nvme_io_md": false, 00:17:09.593 "write_zeroes": true, 00:17:09.593 "zcopy": true, 00:17:09.593 "get_zone_info": false, 00:17:09.593 "zone_management": false, 00:17:09.593 "zone_append": false, 00:17:09.593 "compare": false, 00:17:09.593 "compare_and_write": false, 00:17:09.593 "abort": true, 00:17:09.593 "seek_hole": false, 00:17:09.593 "seek_data": false, 00:17:09.593 "copy": true, 00:17:09.593 "nvme_iov_md": false 00:17:09.593 }, 00:17:09.593 "memory_domains": [ 00:17:09.593 { 00:17:09.593 "dma_device_id": "system", 00:17:09.593 "dma_device_type": 1 00:17:09.593 }, 00:17:09.593 { 00:17:09.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.593 "dma_device_type": 2 00:17:09.593 } 00:17:09.593 ], 00:17:09.593 
"driver_specific": {} 00:17:09.593 } 00:17:09.593 ]' 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:17:09.593 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:09.853 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:17:09.853 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:17:09.853 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:17:09.853 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:09.853 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.237 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:11.237 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:17:11.237 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.237 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:17:11.237 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:17:13.148 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:13.148 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:13.148 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.148 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:13.148 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.148 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:17:13.148 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:13.148 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:13.410 16:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:13.410 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:17:14.352 16:41:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:15.297 ************************************ 00:17:15.297 START TEST filesystem_in_capsule_ext4 00:17:15.297 ************************************ 00:17:15.297 16:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:17:15.297 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:15.297 mke2fs 1.47.0 (5-Feb-2023) 00:17:15.297 Discarding device blocks: 
0/522240 done 00:17:15.297 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:15.297 Filesystem UUID: 23dcd326-92e5-4cf4-832c-3be388b91fd2 00:17:15.297 Superblock backups stored on blocks: 00:17:15.297 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:15.297 00:17:15.297 Allocating group tables: 0/64 done 00:17:15.297 Writing inode tables: 0/64 done 00:17:15.297 Creating journal (8192 blocks): done 00:17:15.868 Writing superblocks and filesystem accounting information: 0/64 done 00:17:15.868 00:17:15.868 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:17:15.868 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:22.452 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3058650 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:22.452 00:17:22.452 real 0m6.935s 00:17:22.452 user 0m0.030s 00:17:22.452 sys 0m0.075s 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:17:22.452 ************************************ 00:17:22.452 END TEST filesystem_in_capsule_ext4 00:17:22.452 ************************************ 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:22.452 ************************************ 00:17:22.452 START 
TEST filesystem_in_capsule_btrfs 00:17:22.452 ************************************ 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:22.452 btrfs-progs v6.8.1 00:17:22.452 See https://btrfs.readthedocs.io for more information. 00:17:22.452 00:17:22.452 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:17:22.452 NOTE: several default settings have changed in version 5.15, please make sure 00:17:22.452 this does not affect your deployments: 00:17:22.452 - DUP for metadata (-m dup) 00:17:22.452 - enabled no-holes (-O no-holes) 00:17:22.452 - enabled free-space-tree (-R free-space-tree) 00:17:22.452 00:17:22.452 Label: (null) 00:17:22.452 UUID: fa2957b6-5c7f-4489-b239-d1375552a607 00:17:22.452 Node size: 16384 00:17:22.452 Sector size: 4096 (CPU page size: 4096) 00:17:22.452 Filesystem size: 510.00MiB 00:17:22.452 Block group profiles: 00:17:22.452 Data: single 8.00MiB 00:17:22.452 Metadata: DUP 32.00MiB 00:17:22.452 System: DUP 8.00MiB 00:17:22.452 SSD detected: yes 00:17:22.452 Zoned device: no 00:17:22.452 Features: extref, skinny-metadata, no-holes, free-space-tree 00:17:22.452 Checksum: crc32c 00:17:22.452 Number of devices: 1 00:17:22.452 Devices: 00:17:22.452 ID SIZE PATH 00:17:22.452 1 510.00MiB /dev/nvme0n1p1 00:17:22.452 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:17:22.452 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3058650 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:22.713 00:17:22.713 real 0m0.497s 00:17:22.713 user 0m0.031s 00:17:22.713 sys 0m0.113s 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:17:22.713 ************************************ 00:17:22.713 END TEST filesystem_in_capsule_btrfs 00:17:22.713 ************************************ 00:17:22.713 16:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:17:22.713 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:22.714 ************************************ 00:17:22.714 START TEST filesystem_in_capsule_xfs 00:17:22.714 ************************************ 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:17:22.714 
16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:17:22.714 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:22.974 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:22.974 = sectsz=512 attr=2, projid32bit=1 00:17:22.974 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:22.974 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:22.974 data = bsize=4096 blocks=130560, imaxpct=25 00:17:22.974 = sunit=0 swidth=0 blks 00:17:22.974 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:22.974 log =internal log bsize=4096 blocks=16384, version=2 00:17:22.974 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:22.974 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:23.545 Discarding blocks...Done. 
00:17:23.545 16:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:17:23.545 16:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3058650 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:26.092 00:17:26.092 real 0m2.918s 00:17:26.092 user 0m0.020s 00:17:26.092 sys 0m0.085s 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:17:26.092 ************************************ 00:17:26.092 END TEST filesystem_in_capsule_xfs 00:17:26.092 ************************************ 00:17:26.092 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:26.092 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:17:26.356 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.617 16:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3058650 00:17:26.617 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3058650 ']' 00:17:26.618 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3058650 00:17:26.618 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:17:26.618 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:26.618 16:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3058650 00:17:26.618 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:26.618 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:26.618 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3058650' 00:17:26.618 killing process with pid 3058650 00:17:26.618 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3058650 00:17:26.618 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3058650 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:17:26.878 00:17:26.878 real 0m17.686s 00:17:26.878 user 1m9.853s 00:17:26.878 sys 0m1.334s 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:26.878 ************************************ 00:17:26.878 END TEST nvmf_filesystem_in_capsule 00:17:26.878 ************************************ 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:26.878 rmmod nvme_tcp 00:17:26.878 rmmod nvme_fabrics 00:17:26.878 rmmod nvme_keyring 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@254 -- # local dev 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:26.878 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:29.428 16:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # return 0 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@274 -- # iptr 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-save 00:17:29.428 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-restore 00:17:29.428 00:17:29.428 real 0m45.863s 00:17:29.428 user 2m22.820s 00:17:29.428 sys 0m8.647s 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:29.428 ************************************ 00:17:29.428 END TEST nvmf_filesystem 00:17:29.428 ************************************ 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.428 ************************************ 00:17:29.428 START TEST nvmf_target_discovery 00:17:29.428 ************************************ 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 
00:17:29.428 * Looking for test storage... 00:17:29.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
scripts/common.sh@344 -- # case "$op" in 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.428 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:29.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.429 --rc genhtml_branch_coverage=1 00:17:29.429 --rc genhtml_function_coverage=1 00:17:29.429 --rc genhtml_legend=1 00:17:29.429 --rc geninfo_all_blocks=1 00:17:29.429 --rc geninfo_unexecuted_blocks=1 00:17:29.429 00:17:29.429 ' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:29.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.429 --rc genhtml_branch_coverage=1 00:17:29.429 --rc genhtml_function_coverage=1 00:17:29.429 --rc genhtml_legend=1 00:17:29.429 --rc geninfo_all_blocks=1 00:17:29.429 --rc geninfo_unexecuted_blocks=1 00:17:29.429 00:17:29.429 ' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:29.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.429 --rc genhtml_branch_coverage=1 00:17:29.429 --rc genhtml_function_coverage=1 00:17:29.429 --rc genhtml_legend=1 00:17:29.429 --rc geninfo_all_blocks=1 00:17:29.429 --rc geninfo_unexecuted_blocks=1 00:17:29.429 00:17:29.429 ' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:29.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.429 --rc genhtml_branch_coverage=1 00:17:29.429 --rc genhtml_function_coverage=1 00:17:29.429 --rc genhtml_legend=1 00:17:29.429 --rc geninfo_all_blocks=1 00:17:29.429 --rc geninfo_unexecuted_blocks=1 00:17:29.429 00:17:29.429 ' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.429 
16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.429 16:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:29.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:17:29.429 16:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:17:29.429 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:36.127 16:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:36.127 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:36.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:36.128 16:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:36.128 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:36.128 
16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:36.128 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.128 16:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:36.128 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@247 -- # create_target_ns 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # 
local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 
00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:36.128 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:36.392 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local 
val=167772161 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:36.393 10.0.0.1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:36.393 16:41:43 
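The val_to_ip step traced here turns the 32-bit pool value 167772161 (0x0A000001) into dotted-quad form. A minimal self-contained sketch of that conversion (the helper name mirrors setup.sh; the bit-shift body is an assumption, since the trace only shows the final printf):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to dotted-quad notation, as nvmf/setup.sh's
# val_to_ip does when carving addresses out of ip_pool=0x0a000001.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # prints: 10.0.0.1
val_to_ip 167772162   # prints: 10.0.0.2
```

Consecutive initiator/target pairs simply consume two addresses from the pool, which is why the trace increments ip_pool by 2 per device pair.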
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:36.393 10.0.0.2 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip 
netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address 
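Condensed, the interface-pair setup traced above amounts to the following sequence (assembled from the eval'd commands in the log; it needs root and the cvl_0_* devices, so treat it as an illustrative config fragment rather than something runnable here):

```shell
# Target-side NIC moves into a private namespace; the initiator NIC
# stays in the host namespace.
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk

# Assign the test IPs and record them in ifalias for later lookup.
ip addr add 10.0.0.1/24 dev cvl_0_0
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias

# Bring both ends up and open the NVMe/TCP port on the initiator side.
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Splitting the pair across namespaces forces real TCP traffic between initiator and target even though both NIC ports sit in the same machine.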
initiator0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:36.393 16:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:36.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.686 ms 00:17:36.393 00:17:36.393 --- 10.0.0.1 ping statistics --- 00:17:36.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.393 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:36.393 
16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:36.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:36.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:17:36.393 00:17:36.393 --- 10.0.0.2 ping statistics --- 00:17:36.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.393 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0 00:17:36.393 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:36.394 16:41:43 
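The ping_ip/set_up/set_ip helpers above all share one bash idiom: the namespace command prefix is passed by *name* and expanded through a nameref (`local -n ns=...`), so the same helper runs either on the host or inside nvmf_ns_spdk. A self-contained sketch of that pattern (the `echo` prefix stands in for `ip netns exec nvmf_ns_spdk`, which needs root):

```shell
#!/usr/bin/env bash
# Command-prefix array; in the real harness this is
# NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk). echo is a harmless stand-in.
NVMF_TARGET_NS_CMD=(echo "netns-prefix:")

# Run "$@" either directly or behind the prefix array named by $1.
in_ns_run() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns   # nameref: ns aliases the array whose name was passed
    "${ns[@]}" "$@"
  else
    "$@"
  fi
}

in_ns_run NVMF_TARGET_NS_CMD ping -c 1 10.0.0.1  # prints: netns-prefix: ping -c 1 10.0.0.1
in_ns_run "" echo host-side                      # prints: host-side
```

This is why the trace shows `local -n ns=NVMF_TARGET_NS_CMD` followed by an eval of the fully expanded `ip netns exec ...` command line.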
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:36.394 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n 
initiator1 ]] 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # return 1 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev= 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@160 -- # return 0 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:36.655 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:36.656 
16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # return 1 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev= 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@160 -- # return 0 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:17:36.656 ' 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 
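Several lookups above (get_ip_address, NVMF_FIRST_INITIATOR_IP, NVMF_FIRST_TARGET_IP) recover an interface's test address by reading back the ifalias attribute that set_ip wrote at assignment time. A self-contained sketch of that round-trip, using a temp directory in place of /sys/class/net so it runs without real devices (the directory substitution is purely for illustration):

```shell
#!/usr/bin/env bash
# Stand-in for /sys/class/net; the real harness writes the sysfs file directly.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/cvl_0_0"

set_ip_alias() {   # record the assigned IP for later lookup
  echo "$2" > "$sysroot/$1/ifalias"
}
get_ip_address() { # read it back, as setup.sh does via cat .../ifalias
  cat "$sysroot/$1/ifalias"
}

set_ip_alias cvl_0_0 10.0.0.1
get_ip_address cvl_0_0   # prints: 10.0.0.1
```

Storing the address in ifalias means later phases never have to re-parse `ip addr` output; a plain `cat` suffices, in or out of the namespace.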
00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=3066572 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 3066572 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3066572 ']' 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:36.656 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.656 [2024-11-05 16:41:43.611827] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:17:36.656 [2024-11-05 16:41:43.611894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.656 [2024-11-05 16:41:43.697076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.917 [2024-11-05 16:41:43.739134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.917 [2024-11-05 16:41:43.739173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.917 [2024-11-05 16:41:43.739181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.917 [2024-11-05 16:41:43.739188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.917 [2024-11-05 16:41:43.739194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:36.917 [2024-11-05 16:41:43.741024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.917 [2024-11-05 16:41:43.741141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.917 [2024-11-05 16:41:43.741301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.917 [2024-11-05 16:41:43.741302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.490 [2024-11-05 16:41:44.468307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:17:37.490 16:41:44 
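The target bring-up traced here reduces to launching nvmf_tgt inside the namespace and creating the TCP transport over RPC. A condensed sketch (binary path and flags copied verbatim from the log; it requires root and a built SPDK tree, so it is a reference fragment rather than something runnable standalone):

```shell
# Start the target inside the test namespace (flags as in the trace:
# -i instance id, -e tracepoint group mask, -m reactor core mask).
ip netns exec nvmf_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Once the app is listening on /var/tmp/spdk.sock, create the transport
# with the same options the trace shows for rpc_cmd.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
```

The harness's waitforlisten step (polling the UNIX RPC socket until the app responds) sits between the two commands; it is elided here.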
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.490 Null1 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.490 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.491 [2024-11-05 16:41:44.528638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.491 Null2 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.491 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 
16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 Null3 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 Null4 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:17:37.752 16:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.752 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:17:38.014 00:17:38.014 Discovery Log Number of Records 6, Generation counter 6 00:17:38.014 =====Discovery Log Entry 0====== 00:17:38.014 trtype: tcp 00:17:38.014 adrfam: ipv4 00:17:38.014 subtype: current discovery subsystem 00:17:38.014 treq: not required 00:17:38.014 portid: 0 00:17:38.014 trsvcid: 4420 00:17:38.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:38.014 traddr: 10.0.0.2 00:17:38.014 eflags: explicit discovery connections, duplicate discovery information 00:17:38.014 sectype: none 00:17:38.014 =====Discovery Log Entry 1====== 00:17:38.014 trtype: tcp 00:17:38.014 adrfam: ipv4 00:17:38.014 subtype: nvme subsystem 00:17:38.014 treq: not required 00:17:38.014 portid: 0 00:17:38.014 trsvcid: 4420 00:17:38.014 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:38.014 traddr: 10.0.0.2 00:17:38.014 eflags: none 00:17:38.014 sectype: none 00:17:38.014 =====Discovery Log Entry 2====== 00:17:38.014 trtype: tcp 00:17:38.014 adrfam: ipv4 00:17:38.014 subtype: nvme subsystem 00:17:38.014 treq: not required 00:17:38.014 portid: 0 00:17:38.014 trsvcid: 4420 00:17:38.014 subnqn: nqn.2016-06.io.spdk:cnode2 00:17:38.014 traddr: 10.0.0.2 00:17:38.014 eflags: none 00:17:38.014 sectype: none 00:17:38.014 =====Discovery Log Entry 3====== 00:17:38.014 trtype: tcp 00:17:38.014 adrfam: ipv4 00:17:38.014 subtype: nvme subsystem 00:17:38.014 treq: not required 00:17:38.014 portid: 
0 00:17:38.014 trsvcid: 4420 00:17:38.014 subnqn: nqn.2016-06.io.spdk:cnode3 00:17:38.014 traddr: 10.0.0.2 00:17:38.014 eflags: none 00:17:38.014 sectype: none 00:17:38.014 =====Discovery Log Entry 4====== 00:17:38.014 trtype: tcp 00:17:38.014 adrfam: ipv4 00:17:38.014 subtype: nvme subsystem 00:17:38.014 treq: not required 00:17:38.014 portid: 0 00:17:38.014 trsvcid: 4420 00:17:38.014 subnqn: nqn.2016-06.io.spdk:cnode4 00:17:38.014 traddr: 10.0.0.2 00:17:38.014 eflags: none 00:17:38.014 sectype: none 00:17:38.014 =====Discovery Log Entry 5====== 00:17:38.014 trtype: tcp 00:17:38.014 adrfam: ipv4 00:17:38.014 subtype: discovery subsystem referral 00:17:38.014 treq: not required 00:17:38.014 portid: 0 00:17:38.014 trsvcid: 4430 00:17:38.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:38.014 traddr: 10.0.0.2 00:17:38.014 eflags: none 00:17:38.014 sectype: none 00:17:38.014 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:17:38.014 Perform nvmf subsystem discovery via RPC 00:17:38.014 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:17:38.014 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.014 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.014 [ 00:17:38.014 { 00:17:38.014 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:38.014 "subtype": "Discovery", 00:17:38.015 "listen_addresses": [ 00:17:38.015 { 00:17:38.015 "trtype": "TCP", 00:17:38.015 "adrfam": "IPv4", 00:17:38.015 "traddr": "10.0.0.2", 00:17:38.015 "trsvcid": "4420" 00:17:38.015 } 00:17:38.015 ], 00:17:38.015 "allow_any_host": true, 00:17:38.015 "hosts": [] 00:17:38.015 }, 00:17:38.015 { 00:17:38.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.015 "subtype": "NVMe", 00:17:38.015 "listen_addresses": [ 
00:17:38.015 { 00:17:38.015 "trtype": "TCP", 00:17:38.015 "adrfam": "IPv4", 00:17:38.015 "traddr": "10.0.0.2", 00:17:38.015 "trsvcid": "4420" 00:17:38.015 } 00:17:38.015 ], 00:17:38.015 "allow_any_host": true, 00:17:38.015 "hosts": [], 00:17:38.015 "serial_number": "SPDK00000000000001", 00:17:38.015 "model_number": "SPDK bdev Controller", 00:17:38.015 "max_namespaces": 32, 00:17:38.015 "min_cntlid": 1, 00:17:38.015 "max_cntlid": 65519, 00:17:38.015 "namespaces": [ 00:17:38.015 { 00:17:38.015 "nsid": 1, 00:17:38.015 "bdev_name": "Null1", 00:17:38.015 "name": "Null1", 00:17:38.015 "nguid": "BCCCF30F9E7D4C65A359FC4A966EC7AB", 00:17:38.015 "uuid": "bcccf30f-9e7d-4c65-a359-fc4a966ec7ab" 00:17:38.015 } 00:17:38.015 ] 00:17:38.015 }, 00:17:38.015 { 00:17:38.015 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:38.015 "subtype": "NVMe", 00:17:38.015 "listen_addresses": [ 00:17:38.015 { 00:17:38.015 "trtype": "TCP", 00:17:38.015 "adrfam": "IPv4", 00:17:38.015 "traddr": "10.0.0.2", 00:17:38.015 "trsvcid": "4420" 00:17:38.015 } 00:17:38.015 ], 00:17:38.015 "allow_any_host": true, 00:17:38.015 "hosts": [], 00:17:38.015 "serial_number": "SPDK00000000000002", 00:17:38.015 "model_number": "SPDK bdev Controller", 00:17:38.015 "max_namespaces": 32, 00:17:38.015 "min_cntlid": 1, 00:17:38.015 "max_cntlid": 65519, 00:17:38.015 "namespaces": [ 00:17:38.015 { 00:17:38.015 "nsid": 1, 00:17:38.015 "bdev_name": "Null2", 00:17:38.015 "name": "Null2", 00:17:38.015 "nguid": "3FCE57D10BCA48B58A71453B19BE746A", 00:17:38.015 "uuid": "3fce57d1-0bca-48b5-8a71-453b19be746a" 00:17:38.015 } 00:17:38.015 ] 00:17:38.015 }, 00:17:38.015 { 00:17:38.015 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:17:38.015 "subtype": "NVMe", 00:17:38.015 "listen_addresses": [ 00:17:38.015 { 00:17:38.015 "trtype": "TCP", 00:17:38.015 "adrfam": "IPv4", 00:17:38.015 "traddr": "10.0.0.2", 00:17:38.015 "trsvcid": "4420" 00:17:38.015 } 00:17:38.015 ], 00:17:38.015 "allow_any_host": true, 00:17:38.015 "hosts": [], 00:17:38.015 
"serial_number": "SPDK00000000000003", 00:17:38.015 "model_number": "SPDK bdev Controller", 00:17:38.015 "max_namespaces": 32, 00:17:38.015 "min_cntlid": 1, 00:17:38.015 "max_cntlid": 65519, 00:17:38.015 "namespaces": [ 00:17:38.015 { 00:17:38.015 "nsid": 1, 00:17:38.015 "bdev_name": "Null3", 00:17:38.015 "name": "Null3", 00:17:38.015 "nguid": "CACE504D189D422084E8B4EE3D116908", 00:17:38.015 "uuid": "cace504d-189d-4220-84e8-b4ee3d116908" 00:17:38.015 } 00:17:38.015 ] 00:17:38.015 }, 00:17:38.015 { 00:17:38.015 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:17:38.015 "subtype": "NVMe", 00:17:38.015 "listen_addresses": [ 00:17:38.015 { 00:17:38.015 "trtype": "TCP", 00:17:38.015 "adrfam": "IPv4", 00:17:38.015 "traddr": "10.0.0.2", 00:17:38.015 "trsvcid": "4420" 00:17:38.015 } 00:17:38.015 ], 00:17:38.015 "allow_any_host": true, 00:17:38.015 "hosts": [], 00:17:38.015 "serial_number": "SPDK00000000000004", 00:17:38.015 "model_number": "SPDK bdev Controller", 00:17:38.015 "max_namespaces": 32, 00:17:38.015 "min_cntlid": 1, 00:17:38.015 "max_cntlid": 65519, 00:17:38.015 "namespaces": [ 00:17:38.015 { 00:17:38.015 "nsid": 1, 00:17:38.015 "bdev_name": "Null4", 00:17:38.015 "name": "Null4", 00:17:38.015 "nguid": "1CEAA4B78F4C4A399BE018B586AD328A", 00:17:38.015 "uuid": "1ceaa4b7-8f4c-4a39-9be0-18b586ad328a" 00:17:38.015 } 00:17:38.015 ] 00:17:38.015 } 00:17:38.015 ] 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:17:38.276 
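The teardown just recorded (discovery.sh@42-47) mirrors the setup: delete each subsystem, then its backing null bdev, then drop the referral, after which bdev_get_bdevs should report nothing left. A dry-run sketch, with the same hypothetical `rpc_cmd` echo stub in place of `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the discovery.sh teardown loop seen in the log.
# rpc_cmd is an echo stub, NOT the real SPDK rpc.py wrapper.
rpc_cmd() { echo "rpc.py $*"; }

for i in $(seq 1 4); do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
```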
16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:38.276 rmmod nvme_tcp 00:17:38.276 rmmod nvme_fabrics 00:17:38.276 rmmod nvme_keyring 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 3066572 ']' 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 3066572 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3066572 ']' 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3066572 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3066572 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3066572' 00:17:38.276 killing process with pid 3066572 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3066572 00:17:38.276 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3066572 00:17:38.537 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:38.537 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:17:38.537 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@254 -- # local dev 00:17:38.537 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:38.537 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:38.538 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:38.538 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # return 0 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:40.452 16:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:17:40.452 
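The `iptr` step that follows (nvmf/common.sh@548) restores the firewall by replaying the saved ruleset minus any SPDK_NVMF-tagged rules: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of the same filter applied to a canned ruleset instead of live iptables — the sample rules below are invented for illustration, not taken from the test machine:

```shell
#!/usr/bin/env bash
# Filter a canned iptables-save dump the way iptr does; the rules here are
# made-up examples. Live form: iptables-save | grep -v SPDK_NVMF | iptables-restore
filtered=$(grep -v SPDK_NVMF <<'EOF'
-A INPUT -i lo -j ACCEPT
-A INPUT -m comment --comment SPDK_NVMF -p tcp --dport 4420 -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
EOF
)
echo "$filtered"   # only the non-SPDK rules survive
```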
16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@274 -- # iptr 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-save 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:17:40.452 00:17:40.452 real 0m11.370s 00:17:40.452 user 0m8.777s 00:17:40.452 sys 0m5.852s 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.452 ************************************ 00:17:40.452 END TEST nvmf_target_discovery 00:17:40.452 ************************************ 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:40.452 16:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:40.713 ************************************ 00:17:40.713 START TEST nvmf_referrals 00:17:40.713 ************************************ 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:40.713 * Looking for test storage... 
00:17:40.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:17:40.713 16:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.713 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:40.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.713 
--rc genhtml_branch_coverage=1 00:17:40.713 --rc genhtml_function_coverage=1 00:17:40.713 --rc genhtml_legend=1 00:17:40.713 --rc geninfo_all_blocks=1 00:17:40.714 --rc geninfo_unexecuted_blocks=1 00:17:40.714 00:17:40.714 ' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:40.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.714 --rc genhtml_branch_coverage=1 00:17:40.714 --rc genhtml_function_coverage=1 00:17:40.714 --rc genhtml_legend=1 00:17:40.714 --rc geninfo_all_blocks=1 00:17:40.714 --rc geninfo_unexecuted_blocks=1 00:17:40.714 00:17:40.714 ' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:40.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.714 --rc genhtml_branch_coverage=1 00:17:40.714 --rc genhtml_function_coverage=1 00:17:40.714 --rc genhtml_legend=1 00:17:40.714 --rc geninfo_all_blocks=1 00:17:40.714 --rc geninfo_unexecuted_blocks=1 00:17:40.714 00:17:40.714 ' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:40.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.714 --rc genhtml_branch_coverage=1 00:17:40.714 --rc genhtml_function_coverage=1 00:17:40.714 --rc genhtml_legend=1 00:17:40.714 --rc geninfo_all_blocks=1 00:17:40.714 --rc geninfo_unexecuted_blocks=1 00:17:40.714 00:17:40.714 ' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.714 
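The lcov gate above (`lt 1.15 2` via the cmp_versions machinery in scripts/common.sh) compares dotted version strings component by component. A simplified sketch of that idea — `ver_lt` is my own name for illustration, not the scripts/common.sh API:

```shell
#!/usr/bin/env bash
# Component-wise dotted-version compare, mirroring the cmp_versions idea:
# split on ".", treat missing components as 0, compare numerically.
ver_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<<"$1"
    IFS=. read -ra v2 <<<"$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for ((i = 0; i < n; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Note the numeric compare: a plain string compare would wrongly say 1.2 > 1.10.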
16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 
-- # : 0 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:40.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # 
nvmftestinit 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:17:40.714 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:48.858 16:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.858 16:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:48.858 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.858 16:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:48.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:48.858 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up 
== up ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:48.859 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:48.859 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:48.859 16:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@247 -- # create_target_ns 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@248 -- # 
setup_interfaces 1 phy 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # [[ phy == veth 
]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:48.859 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 
00:17:48.859 10.0.0.1 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:48.859 10.0.0.2 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:17:48.859 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:48.859 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:48.860 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:48.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.686 ms 00:17:48.860 00:17:48.860 --- 10.0.0.1 ping statistics --- 00:17:48.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.860 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:48.860 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:48.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:17:48.860 00:17:48.860 --- 10.0.0.2 ping statistics --- 00:17:48.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.860 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:17:48.860 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # 
get_ip_address initiator1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # return 1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev= 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@160 -- # return 0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:48.860 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target1 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:48.860 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # return 1 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev= 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@160 -- # return 0 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:17:48.861 ' 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter 
start_nvmf_tgt 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=3071030 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 3071030 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3071030 ']' 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:48.861 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:48.861 [2024-11-05 16:41:55.372076] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:17:48.861 [2024-11-05 16:41:55.372146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.861 [2024-11-05 16:41:55.454917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.861 [2024-11-05 16:41:55.496542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.861 [2024-11-05 16:41:55.496579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.861 [2024-11-05 16:41:55.496587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.861 [2024-11-05 16:41:55.496593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.861 [2024-11-05 16:41:55.496599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:48.861 [2024-11-05 16:41:55.498437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.861 [2024-11-05 16:41:55.498537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.861 [2024-11-05 16:41:55.498693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.861 [2024-11-05 16:41:55.498694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.122 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:49.122 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:17:49.122 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:49.122 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:49.122 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 [2024-11-05 16:41:56.229571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 [2024-11-05 16:41:56.245773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- 
# rpc_cmd nvmf_discovery_get_referrals 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:17:49.383 16:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:49.383 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.644 16:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq 
-r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:49.644 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:49.905 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:50.166 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:17:50.166 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:50.166 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:17:50.166 16:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:17:50.166 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:50.166 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:50.166 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t 
tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:50.427 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:50.687 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:50.687 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:50.687 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:17:50.687 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:50.687 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:50.687 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:50.687 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:50.947 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:50.947 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:50.947 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:50.947 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery 
subsystem referral' 00:17:50.947 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:50.947 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:50.947 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:50.947 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:50.947 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.947 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # 
get_referral_ips nvme 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:51.208 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:51.468 rmmod nvme_tcp 00:17:51.468 rmmod nvme_fabrics 00:17:51.468 rmmod nvme_keyring 00:17:51.468 16:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 3071030 ']' 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 3071030 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3071030 ']' 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3071030 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3071030 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:51.468 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3071030' 00:17:51.468 killing process with pid 3071030 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3071030 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3071030 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
nvmf_fini 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@254 -- # local dev 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:51.469 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # return 0 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:54.013 16:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@274 -- # iptr 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-save 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-restore 00:17:54.013 00:17:54.013 real 0m13.064s 00:17:54.013 user 0m15.287s 00:17:54.013 sys 0m6.432s 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:54.013 ************************************ 00:17:54.013 END TEST nvmf_referrals 00:17:54.013 ************************************ 00:17:54.013 16:42:00 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.013 ************************************ 00:17:54.013 START TEST nvmf_connect_disconnect 00:17:54.013 ************************************ 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:54.013 * Looking for test storage... 00:17:54.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:54.013 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 
00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:54.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.014 --rc genhtml_branch_coverage=1 00:17:54.014 --rc 
genhtml_function_coverage=1 00:17:54.014 --rc genhtml_legend=1 00:17:54.014 --rc geninfo_all_blocks=1 00:17:54.014 --rc geninfo_unexecuted_blocks=1 00:17:54.014 00:17:54.014 ' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:54.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.014 --rc genhtml_branch_coverage=1 00:17:54.014 --rc genhtml_function_coverage=1 00:17:54.014 --rc genhtml_legend=1 00:17:54.014 --rc geninfo_all_blocks=1 00:17:54.014 --rc geninfo_unexecuted_blocks=1 00:17:54.014 00:17:54.014 ' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:54.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.014 --rc genhtml_branch_coverage=1 00:17:54.014 --rc genhtml_function_coverage=1 00:17:54.014 --rc genhtml_legend=1 00:17:54.014 --rc geninfo_all_blocks=1 00:17:54.014 --rc geninfo_unexecuted_blocks=1 00:17:54.014 00:17:54.014 ' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:54.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.014 --rc genhtml_branch_coverage=1 00:17:54.014 --rc genhtml_function_coverage=1 00:17:54.014 --rc genhtml_legend=1 00:17:54.014 --rc geninfo_all_blocks=1 00:17:54.014 --rc geninfo_unexecuted_blocks=1 00:17:54.014 00:17:54.014 ' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:54.014 16:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:54.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:54.014 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.015 16:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:17:54.015 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:02.156 16:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:02.156 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound 
]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:02.156 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.156 16:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:02.156 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 
00:18:02.156 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@247 -- # create_target_ns 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:02.156 16:42:07 
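The device-discovery phase above resolves each PCI address to its kernel netdev by globbing `/sys/bus/pci/devices/<bdf>/net/*` and stripping the path prefix with a `##*/` expansion. A minimal, side-effect-free sketch of that lookup, run against a temporary directory instead of the real sysfs so it works anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the PCI-to-netdev lookup in nvmf/common.sh: glob the device's
# net/ directory, then keep only the leaf names. The fake sysfs tree here
# mirrors the 0000:4b:00.0 -> cvl_0_0 mapping seen in the log.
sysfs=$(mktemp -d)
pci=0000:4b:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)          # same glob shape as the real /sys path
pci_net_devs=("${pci_net_devs[@]##*/}")     # strip everything up to the last '/'
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$sysfs"
```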
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:18:02.156 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:02.157 16:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:02.157 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # 
local val=167772161 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:02.157 10.0.0.1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 
00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:02.157 10.0.0.2 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.157 16:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 
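The `set_ip` calls above feed an integer from the `ip_pool` counter (0x0a000001) through a `val_to_ip` helper to get dotted-quad addresses. A self-contained re-implementation of that conversion, matching the `printf '%u.%u.%u.%u\n' 10 0 0 1` expansion visible in the trace (the byte-shift arithmetic is an assumption about how the helper unpacks the value):

```shell
#!/usr/bin/env bash
# Unpack a 32-bit integer into dotted-quad notation, as nvmf/setup.sh's
# val_to_ip does: 167772161 == 0x0A000001 -> 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # initiator side -> 10.0.0.1
val_to_ip 167772162   # target side    -> 10.0.0.2
```

This is why the trace shows the pool advancing by two per interface pair (`ip_pool += 2`): each pair consumes consecutive addresses for initiator and target.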
00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:02.157 16:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:02.157 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:02.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.662 ms 00:18:02.157 00:18:02.158 --- 10.0.0.1 ping statistics --- 00:18:02.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.158 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:02.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:02.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:18:02.158 00:18:02.158 --- 10.0.0.2 ping statistics --- 00:18:02.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.158 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:02.158 16:42:08 
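Throughout the setup and ping phases, the same dispatch pattern repeats: when a caller passes the name of a command-prefix array (here `NVMF_TARGET_NS_CMD`), the helper binds it with a bash nameref (`local -n`) and prefixes the command with `ip netns exec <ns>`; otherwise the command runs in the default namespace. A sketch of that pattern (`run_in_ns` is a hypothetical name; the real helpers `set_up`, `set_ip`, and `ping_ip` each inline this logic), echoing the composed command instead of `eval`-ing it so the sketch has no side effects:

```shell
#!/usr/bin/env bash
# Mirror the namespace values from the log.
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# If $1 names a prefix array, run the rest of the args behind that prefix;
# otherwise run them as-is. Echo rather than eval: side-effect-free sketch.
run_in_ns() {
    local in_ns=$1; shift
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns          # nameref, as in "local -n ns=NVMF_TARGET_NS_CMD"
        echo "${ns[*]} $*"
    else
        echo "$*"
    fi
}

run_in_ns NVMF_TARGET_NS_CMD ip link set cvl_0_1 up   # target side, inside the netns
run_in_ns ""                 ip link set cvl_0_0 up   # initiator side, default netns
```

This is the mechanism behind the paired traces above: `eval 'ip netns exec nvmf_ns_spdk ...'` for the target device, bare `eval ' ...'` for the initiator.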
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # return 1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev= 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@160 -- # return 0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 
00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:18:02.158 16:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # return 1 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev= 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@160 -- # return 0 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:18:02.158 ' 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:02.158 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
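The trace above shows `NVMF_TRANSPORT_OPTS` being built in two steps: it starts as `-t tcp`, then the `tcp == tcp` branch appends `-o`, giving the `-t tcp -o` passed to `nvmf_create_transport` later. A minimal sketch of that selection (the function name is hypothetical; the rdma branch, which would append its own options instead, is elided):

```shell
#!/usr/bin/env bash
# Derive the transport options string the way nvmf/common.sh does for this run:
# base "-t <transport>", plus "-o" when the transport is tcp (as seen in the log).
build_transport_opts() {
    local transport=$1
    local opts="-t $transport"
    if [[ $transport == tcp ]]; then
        opts+=" -o"
    fi
    echo "$opts"
}

build_transport_opts tcp
```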
common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=3076220 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 3076220 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3076220 ']' 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.159 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:02.159 [2024-11-05 16:42:08.459693] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:18:02.159 [2024-11-05 16:42:08.459757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.159 [2024-11-05 16:42:08.538457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.159 [2024-11-05 16:42:08.576053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.159 [2024-11-05 16:42:08.576086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.159 [2024-11-05 16:42:08.576094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.159 [2024-11-05 16:42:08.576101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.159 [2024-11-05 16:42:08.576107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:02.159 [2024-11-05 16:42:08.577807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.159 [2024-11-05 16:42:08.578060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.159 [2024-11-05 16:42:08.578220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.159 [2024-11-05 16:42:08.578221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.420 [2024-11-05 16:42:09.308281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.420 16:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.420 [2024-11-05 16:42:09.380084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:18:02.420 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:18:06.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.742 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:20.742 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:20.742 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:20.742 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:18:20.742 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:20.742 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:18:20.742 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:20.743 rmmod nvme_tcp 00:18:20.743 rmmod nvme_fabrics 00:18:20.743 rmmod nvme_keyring 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 3076220 ']' 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 3076220 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3076220 ']' 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3076220 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3076220 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3076220' 00:18:20.743 killing process with pid 3076220 00:18:20.743 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3076220 00:18:20.743 16:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3076220 00:18:21.003 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:21.003 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:18:21.003 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@254 -- # local dev 00:18:21.003 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:18:21.003 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:21.003 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:21.003 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # return 0 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:18:23.119 16:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@274 -- # iptr 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # grep -v 
SPDK_NVMF 00:18:23.119 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:18:23.119 00:18:23.119 real 0m29.326s 00:18:23.119 user 1m19.209s 00:18:23.119 sys 0m7.049s 00:18:23.119 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:23.119 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:23.119 ************************************ 00:18:23.119 END TEST nvmf_connect_disconnect 00:18:23.119 ************************************ 00:18:23.119 16:42:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:23.119 16:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:23.119 16:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:23.119 16:42:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:23.119 ************************************ 00:18:23.119 START TEST nvmf_multitarget 00:18:23.119 ************************************ 00:18:23.119 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:23.383 * Looking for test storage... 
00:18:23.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:23.383 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.383 --rc genhtml_branch_coverage=1 00:18:23.383 --rc genhtml_function_coverage=1 00:18:23.383 --rc genhtml_legend=1 00:18:23.383 --rc geninfo_all_blocks=1 00:18:23.383 --rc geninfo_unexecuted_blocks=1 00:18:23.383 00:18:23.383 ' 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:23.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.383 --rc genhtml_branch_coverage=1 00:18:23.383 --rc genhtml_function_coverage=1 00:18:23.383 --rc genhtml_legend=1 00:18:23.383 --rc geninfo_all_blocks=1 00:18:23.383 --rc geninfo_unexecuted_blocks=1 00:18:23.383 00:18:23.383 ' 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:23.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.383 --rc genhtml_branch_coverage=1 00:18:23.383 --rc genhtml_function_coverage=1 00:18:23.383 --rc genhtml_legend=1 00:18:23.383 --rc geninfo_all_blocks=1 00:18:23.383 --rc geninfo_unexecuted_blocks=1 00:18:23.383 00:18:23.383 ' 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:23.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.383 --rc genhtml_branch_coverage=1 00:18:23.383 --rc genhtml_function_coverage=1 00:18:23.383 --rc genhtml_legend=1 00:18:23.383 --rc geninfo_all_blocks=1 00:18:23.383 --rc geninfo_unexecuted_blocks=1 00:18:23.383 00:18:23.383 ' 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.383 16:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.383 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@50 -- # : 0 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:23.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:23.384 16:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 00:18:23.384 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:31.533 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # mlx=() 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.533 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:31.533 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:31.533 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:31.534 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:31.534 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:31.534 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:31.534 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:31.534 
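The device-discovery steps traced above (common.sh@226–245) resolve each whitelisted PCI address to its bound kernel net device by globbing sysfs and stripping the path prefix. A minimal standalone sketch of that lookup, assuming a Linux sysfs layout; the function name `pci_to_netdevs` is ours, not SPDK's:

```shell
#!/usr/bin/env bash
# Hedged sketch of the sysfs lookup in the trace above, mirroring
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
#   pci_net_devs=("${pci_net_devs[@]##*/}")
pci_to_netdevs() {
    local pci=$1
    local devs=( "/sys/bus/pci/devices/$pci/net/"* )
    # An unmatched glob stays literal; treat that as "no net device bound".
    [[ -e ${devs[0]} ]] || return 1
    # Keep only the interface names, dropping the sysfs path prefix.
    printf '%s\n' "${devs[@]##*/}"
}

# Example, using a device address from the log (output depends on the host):
pci_to_netdevs 0000:4b:00.0 || echo "no net device for 0000:4b:00.0"
```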
16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@247 -- # create_target_ns 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:31.534 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:31.534 10.0.0.1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:31.534 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:31.534 10.0.0.2 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:18:31.534 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
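The `val_to_ip` calls above turn the pool integers 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2. In the trace the `printf` octets are already precomputed; the sketch below reconstructs the conversion with explicit shifts (our reconstruction, not necessarily SPDK's exact implementation):

```shell
#!/usr/bin/env bash
# Reconstructed sketch of val_to_ip: split a 32-bit integer into dotted-quad
# octets. 167772161 == 0x0a000001 == 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```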
nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:31.535 
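The loop arithmetic in setup.sh@31–33 hands each initiator/target pair two consecutive addresses from a pool starting at 0x0a000001, with the guard `(_dev + no) * 2 <= 255` keeping the pool inside one /24. A sketch of that allocation for three pairs (variable names are ours):

```shell
#!/usr/bin/env bash
# Sketch of the address-pool arithmetic traced above: pair N gets
# 10.0.0.(2N+1) for the initiator and 10.0.0.(2N+2) for the target.
ip_pool=$(( 0x0a000001 ))
for pair in 0 1 2; do
    init=$(( ip_pool + pair * 2 ))
    tgt=$(( init + 1 ))
    printf 'pair%d: initiator=10.0.0.%d target=10.0.0.%d\n' \
        "$pair" $(( init & 0xff )) $(( tgt & 0xff ))
done
```

With one pair (`no=1`), as in this run, only 10.0.0.1 and 10.0.0.2 are consumed, matching the addresses assigned to cvl_0_0 and cvl_0_1 above.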
16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:31.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.581 ms 00:18:31.535 00:18:31.535 --- 10.0.0.1 ping statistics --- 00:18:31.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.535 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:31.535 
16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:31.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:18:31.535 00:18:31.535 --- 10.0.0.2 ping statistics --- 00:18:31.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.535 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:31.535 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:31.535 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # return 1 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev= 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@160 -- # return 0 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:31.536 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target1 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target1 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # return 1 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev= 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@160 -- # return 0 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:18:31.536 ' 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:31.536 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=3084826 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 3084826 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3084826 ']' 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.536 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:31.536 [2024-11-05 16:42:37.841092] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:18:31.536 [2024-11-05 16:42:37.841143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.536 [2024-11-05 16:42:37.918937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.536 [2024-11-05 16:42:37.955474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.536 [2024-11-05 16:42:37.955508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.536 [2024-11-05 16:42:37.955516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.536 [2024-11-05 16:42:37.955522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.536 [2024-11-05 16:42:37.955528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.536 [2024-11-05 16:42:37.957058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.536 [2024-11-05 16:42:37.957169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.536 [2024-11-05 16:42:37.957323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.536 [2024-11-05 16:42:37.957324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:31.798 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:18:32.059 "nvmf_tgt_1" 00:18:32.059 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:32.059 "nvmf_tgt_2" 00:18:32.059 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:32.059 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:32.059 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:32.059 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:32.319 true 00:18:32.319 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:32.319 true 00:18:32.319 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:32.319 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:32.580 16:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:32.580 rmmod nvme_tcp 00:18:32.580 rmmod nvme_fabrics 00:18:32.580 rmmod nvme_keyring 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 3084826 ']' 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 3084826 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3084826 ']' 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3084826 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3084826 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3084826' 00:18:32.580 killing process with pid 3084826 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3084826 00:18:32.580 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3084826 00:18:32.840 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:32.840 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:18:32.840 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@254 -- # local dev 00:18:32.840 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # remove_target_ns 00:18:32.840 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:32.840 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:32.840 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # delete_main_bridge 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # return 0 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:34.753 16:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@274 -- # iptr 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # 
iptables-save 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # iptables-restore 00:18:34.753 00:18:34.753 real 0m11.672s 00:18:34.753 user 0m9.886s 00:18:34.753 sys 0m6.014s 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:34.753 ************************************ 00:18:34.753 END TEST nvmf_multitarget 00:18:34.753 ************************************ 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:34.753 16:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.014 ************************************ 00:18:35.014 START TEST nvmf_rpc 00:18:35.014 ************************************ 00:18:35.014 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:35.014 * Looking for test storage... 
00:18:35.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.014 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:35.014 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:18:35.014 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.014 16:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:35.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.014 --rc genhtml_branch_coverage=1 00:18:35.014 --rc genhtml_function_coverage=1 00:18:35.014 --rc genhtml_legend=1 00:18:35.014 --rc geninfo_all_blocks=1 00:18:35.014 --rc geninfo_unexecuted_blocks=1 
00:18:35.014 00:18:35.014 ' 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:35.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.014 --rc genhtml_branch_coverage=1 00:18:35.014 --rc genhtml_function_coverage=1 00:18:35.014 --rc genhtml_legend=1 00:18:35.014 --rc geninfo_all_blocks=1 00:18:35.014 --rc geninfo_unexecuted_blocks=1 00:18:35.014 00:18:35.014 ' 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:35.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.014 --rc genhtml_branch_coverage=1 00:18:35.014 --rc genhtml_function_coverage=1 00:18:35.014 --rc genhtml_legend=1 00:18:35.014 --rc geninfo_all_blocks=1 00:18:35.014 --rc geninfo_unexecuted_blocks=1 00:18:35.014 00:18:35.014 ' 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:35.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.014 --rc genhtml_branch_coverage=1 00:18:35.014 --rc genhtml_function_coverage=1 00:18:35.014 --rc genhtml_legend=1 00:18:35.014 --rc geninfo_all_blocks=1 00:18:35.014 --rc geninfo_unexecuted_blocks=1 00:18:35.014 00:18:35.014 ' 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.014 16:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.014 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@5 -- # export PATH 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:35.277 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:18:35.277 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.872 16:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:41.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:41.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:41.872 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:41.873 16:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:41.873 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:41.873 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:41.873 16:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:41.873 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@247 -- # create_target_ns 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@248 -- # 
setup_interfaces 1 phy 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # [[ phy == veth 
]] 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:42.134 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:42.134 10.0.0.1 00:18:42.134 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:42.134 16:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:42.135 10.0.0.2 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- 
# ip link set cvl_0_0 up 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:42.135 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:42.396 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:42.397 
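The trace above shows `val_to_ip` turning the integer pool value 167772161 (0x0A000001) into `10.0.0.1` via `printf '%u.%u.%u.%u'`. A standalone sketch of that conversion, using shifts and masks (the decomposition into four arguments is my reconstruction of what the traced helper computes, not a copy of its body):

```shell
#!/usr/bin/env bash
# Unpack a 32-bit integer into dotted-quad IPv4 form, as the traced
# val_to_ip helper does: 167772161 == 0x0A000001 -> 10.0.0.1.
set -euo pipefail

val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # initiator-side address from the pool
val_to_ip 167772162   # target-side address (pool value + 1)
```

This is why the setup loop can hand out addresses by plain arithmetic (`ip_pool += 2` per interface pair) and only convert to dotted form at assignment time.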
16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 
00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:42.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.653 ms 00:18:42.397 00:18:42.397 --- 10.0.0.1 ping statistics --- 00:18:42.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.397 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:42.397 16:42:49 
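Helpers like `set_up`, `set_ip`, and `ping_ip` in the trace take an optional *name* of a command-array variable and bind it with a bash nameref (`local -n`), so the same function runs either on the host or inside the namespace. A minimal sketch of that pattern — it echoes the command instead of `eval`-ing it, which the real helper does:

```shell
#!/usr/bin/env bash
# Sketch of the nameref dispatch pattern used by set_up() in nvmf/setup.sh:
# when a second argument names an array, prefix the command with its contents
# (here: the `ip netns exec` wrapper for the target namespace).
set -euo pipefail

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

set_up() {
    local dev=$1 in_ns=${2:-} cmd="ip link set $1 up"
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns          # nameref: ns now aliases the named array
        cmd="${ns[*]} $cmd"
    fi
    echo "$cmd"                     # the real helper evals this instead
}

set_up lo NVMF_TARGET_NS_CMD   # -> ip netns exec nvmf_ns_spdk ip link set lo up
set_up cvl_0_0                 # -> ip link set cvl_0_0 up
```

Namerefs (bash ≥ 4.3) avoid `eval "\$$in_ns"` indirection while still letting the caller pass the wrapper by name rather than by value.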
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:42.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:42.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:18:42.397 00:18:42.397 --- 10.0.0.2 ping statistics --- 00:18:42.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.397 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:42.397 16:42:49 
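At this point the whole `nvmf_tcp_init` phase has run: namespace created, the second NIC moved into it, addresses assigned (and mirrored into `ifalias`), links brought up, the NVMe/TCP port opened in iptables, and connectivity verified in both directions. The sequence distills to roughly the commands below; the `run`/`DRY_RUN` wrapper is my addition so the sketch can be read (or printed) without root and real NICs, and the device names are the ones from this trace:

```shell
#!/usr/bin/env bash
# Distilled initiator/target pair setup, as performed by nvmf/setup.sh above.
# DRY_RUN=1 (default here) prints the commands; running them for real needs
# root and the two physical interfaces named below.
set -euo pipefail

NS=nvmf_ns_spdk            # target network namespace (from the trace)
INITIATOR=cvl_0_0          # host-side NIC, 10.0.0.1
TARGET=cvl_0_1             # NIC moved into the namespace, 10.0.0.2
DRY_RUN=${DRY_RUN:-1}

run() { if (( DRY_RUN )); then echo "+ $*"; else "$@"; fi; }

run ip netns add "$NS"
run ip netns exec "$NS" ip link set lo up
run ip link set "$TARGET" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET"
run ip link set "$INITIATOR" up
run ip netns exec "$NS" ip link set "$TARGET" up
# open the NVMe/TCP discovery/IO port for traffic arriving on the initiator NIC
run iptables -I INPUT 1 -i "$INITIATOR" -p tcp --dport 4420 -j ACCEPT
# verify both directions, as ping_ips does
run ip netns exec "$NS" ping -c 1 10.0.0.1
run ping -c 1 10.0.0.2
```

Moving the target NIC into its own namespace is what lets a single machine act as both NVMe-oF initiator and target over real hardware without the kernel short-circuiting the traffic.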
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # return 1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev= 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@160 -- # return 0 00:18:42.397 16:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:42.397 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 
10.0.0.2 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target1 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # return 1 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev= 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@160 -- # return 0 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:18:42.398 ' 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.398 
16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=3089419 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 3089419 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3089419 ']' 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:42.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:42.398 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.398 [2024-11-05 16:42:49.452455] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:18:42.398 [2024-11-05 16:42:49.452509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.659 [2024-11-05 16:42:49.531476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.659 [2024-11-05 16:42:49.568297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.659 [2024-11-05 16:42:49.568335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.659 [2024-11-05 16:42:49.568343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.659 [2024-11-05 16:42:49.568349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.659 [2024-11-05 16:42:49.568355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.659 [2024-11-05 16:42:49.570098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.659 [2024-11-05 16:42:49.570197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.659 [2024-11-05 16:42:49.570346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.659 [2024-11-05 16:42:49.570347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.228 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:43.228 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:18:43.228 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:43.228 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.228 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:43.505 "tick_rate": 2400000000, 00:18:43.505 "poll_groups": [ 00:18:43.505 { 00:18:43.505 "name": "nvmf_tgt_poll_group_000", 00:18:43.505 "admin_qpairs": 0, 00:18:43.505 "io_qpairs": 0, 00:18:43.505 "current_admin_qpairs": 0, 00:18:43.505 "current_io_qpairs": 0, 00:18:43.505 "pending_bdev_io": 0, 00:18:43.505 "completed_nvme_io": 0, 
00:18:43.505 "transports": [] 00:18:43.505 }, 00:18:43.505 { 00:18:43.505 "name": "nvmf_tgt_poll_group_001", 00:18:43.505 "admin_qpairs": 0, 00:18:43.505 "io_qpairs": 0, 00:18:43.505 "current_admin_qpairs": 0, 00:18:43.505 "current_io_qpairs": 0, 00:18:43.505 "pending_bdev_io": 0, 00:18:43.505 "completed_nvme_io": 0, 00:18:43.505 "transports": [] 00:18:43.505 }, 00:18:43.505 { 00:18:43.505 "name": "nvmf_tgt_poll_group_002", 00:18:43.505 "admin_qpairs": 0, 00:18:43.505 "io_qpairs": 0, 00:18:43.505 "current_admin_qpairs": 0, 00:18:43.505 "current_io_qpairs": 0, 00:18:43.505 "pending_bdev_io": 0, 00:18:43.505 "completed_nvme_io": 0, 00:18:43.505 "transports": [] 00:18:43.505 }, 00:18:43.505 { 00:18:43.505 "name": "nvmf_tgt_poll_group_003", 00:18:43.505 "admin_qpairs": 0, 00:18:43.505 "io_qpairs": 0, 00:18:43.505 "current_admin_qpairs": 0, 00:18:43.505 "current_io_qpairs": 0, 00:18:43.505 "pending_bdev_io": 0, 00:18:43.505 "completed_nvme_io": 0, 00:18:43.505 "transports": [] 00:18:43.505 } 00:18:43.505 ] 00:18:43.505 }' 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:43.505 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.506 [2024-11-05 16:42:50.416617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:43.506 "tick_rate": 2400000000, 00:18:43.506 "poll_groups": [ 00:18:43.506 { 00:18:43.506 "name": "nvmf_tgt_poll_group_000", 00:18:43.506 "admin_qpairs": 0, 00:18:43.506 "io_qpairs": 0, 00:18:43.506 "current_admin_qpairs": 0, 00:18:43.506 "current_io_qpairs": 0, 00:18:43.506 "pending_bdev_io": 0, 00:18:43.506 "completed_nvme_io": 0, 00:18:43.506 "transports": [ 00:18:43.506 { 00:18:43.506 "trtype": "TCP" 00:18:43.506 } 00:18:43.506 ] 00:18:43.506 }, 00:18:43.506 { 00:18:43.506 "name": "nvmf_tgt_poll_group_001", 00:18:43.506 "admin_qpairs": 0, 00:18:43.506 "io_qpairs": 0, 00:18:43.506 "current_admin_qpairs": 0, 00:18:43.506 "current_io_qpairs": 0, 00:18:43.506 "pending_bdev_io": 0, 00:18:43.506 "completed_nvme_io": 0, 00:18:43.506 "transports": [ 00:18:43.506 { 00:18:43.506 "trtype": "TCP" 00:18:43.506 } 00:18:43.506 ] 00:18:43.506 }, 00:18:43.506 { 00:18:43.506 "name": "nvmf_tgt_poll_group_002", 00:18:43.506 "admin_qpairs": 0, 00:18:43.506 "io_qpairs": 0, 00:18:43.506 "current_admin_qpairs": 0, 00:18:43.506 "current_io_qpairs": 0, 00:18:43.506 "pending_bdev_io": 0, 00:18:43.506 "completed_nvme_io": 0, 00:18:43.506 
"transports": [ 00:18:43.506 { 00:18:43.506 "trtype": "TCP" 00:18:43.506 } 00:18:43.506 ] 00:18:43.506 }, 00:18:43.506 { 00:18:43.506 "name": "nvmf_tgt_poll_group_003", 00:18:43.506 "admin_qpairs": 0, 00:18:43.506 "io_qpairs": 0, 00:18:43.506 "current_admin_qpairs": 0, 00:18:43.506 "current_io_qpairs": 0, 00:18:43.506 "pending_bdev_io": 0, 00:18:43.506 "completed_nvme_io": 0, 00:18:43.506 "transports": [ 00:18:43.506 { 00:18:43.506 "trtype": "TCP" 00:18:43.506 } 00:18:43.506 ] 00:18:43.506 } 00:18:43.506 ] 00:18:43.506 }' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:43.506 16:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.506 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 Malloc1 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 [2024-11-05 16:42:50.623728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.767 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:43.768 [2024-11-05 16:42:50.660512] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:18:43.768 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:43.768 could not add new controller: failed to write to nvme-fabrics device 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
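The failed `nvme connect` above is the expected path: with `allow_any_host` disabled, the subsystem rejects any host NQN that has not been explicitly added, and the trace then runs `nvmf_subsystem_add_host` to permit it. A hedged sketch of that sequence; `rpc_cmd` is stubbed with `echo` here (in the real test it forwards to SPDK's `scripts/rpc.py`):

```shell
# Host access-control flow: the connect attempt fails with
# "Subsystem ... does not allow host ..." until the host NQN is
# whitelisted. rpc_cmd is a stub so this runs without a target.
rpc_cmd() { echo "rpc: $*"; }
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# allow this specific host, then the reconnect in the log succeeds:
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
```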
00:18:43.768 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:45.676 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:45.676 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:45.676 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.677 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:45.677 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:47.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- 
# local i=0 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:47.590 16:42:54 
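The `waitforserial` / `waitforserial_disconnect` helpers seen above poll `lsblk -l -o NAME,SERIAL` for the subsystem serial (SPDKISFASTANDAWESOME) until the namespace appears or disappears. A self-contained sketch of the polling loop; `lsblk_stub` stands in for the real `lsblk` call so the sketch runs on any machine:

```shell
# waitforserial-style loop: retry up to 16 times, counting block
# devices whose SERIAL column matches. lsblk_stub is a stand-in.
lsblk_stub() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }
serial=SPDKISFASTANDAWESOME
i=0
nvme_devices=0
while [ "$i" -le 15 ]; do
  nvme_devices=$(lsblk_stub | grep -c "$serial")
  [ "$nvme_devices" -eq 1 ] && break
  i=$((i + 1))   # the real helper sleeps 2s between probes
done
echo "devices with serial: $nvme_devices"
```

In the trace the first probe already finds the device (`nvme_devices=1`), so the helper returns 0 immediately.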
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.590 [2024-11-05 16:42:54.427656] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:18:47.590 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:47.590 could not add new controller: failed to write to nvme-fabrics device 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.590 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.975 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:48.975 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:48.975 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.975 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:48.975 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:51.522 16:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:51.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.522 16:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.522 [2024-11-05 16:42:58.199982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.522 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:52.917 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:52.917 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:52.917 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.917 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:52.917 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.833 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
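The trace repeats the same subsystem lifecycle for each of the five loop iterations (`seq 1 $loops` in rpc.sh@81). Sketched below with `rpc_cmd` stubbed as `echo` so it runs without a live SPDK target; in the real test each call goes through `scripts/rpc.py`, and the nvme connect/disconnect steps happen between namespace attach and removal:

```shell
# One pass of the create/attach/connect/teardown cycle from rpc.sh.
# rpc_cmd is a stub; host-side nvme steps are noted as comments.
rpc_cmd() { echo "rpc: $*"; }
NQN=nqn.2016-06.io.spdk:cnode1
for i in 1 2 3 4 5; do
  rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
  # nvme connect ...; waitforserial; nvme disconnect; waitforserial_disconnect
  rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
  rpc_cmd nvmf_delete_subsystem "$NQN"
done
```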
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.093 [2024-11-05 16:43:01.927471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.093 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:55.094 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.094 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.094 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.094 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:55.094 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.094 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.094 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.094 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:56.479 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:56.479 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 
00:18:56.479 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.479 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:56.479 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:59.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.022 [2024-11-05 16:43:05.703871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.022 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:00.406 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:00.406 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:19:00.406 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.406 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:00.406 
16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:19:02.317 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:02.317 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:02.317 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:02.317 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:02.317 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.317 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:19:02.317 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.578 [2024-11-05 16:43:09.473559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:03.959 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:03.959 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:19:03.959 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.959 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:03.959 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:19:06.502 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:06.502 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:06.502 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:06.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.502 [2024-11-05 16:43:13.204856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.502 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:07.887 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:07.887 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:19:07.887 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.887 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:07.887 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:09.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:09.797 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.057 16:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 [2024-11-05 16:43:16.929287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 
16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 
16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 [2024-11-05 16:43:16.997466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.057 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.057 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.057 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.057 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 [2024-11-05 16:43:17.065669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.058 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 [2024-11-05 16:43:17.129867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 [2024-11-05 16:43:17.198087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.319 16:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:19:10.319 "tick_rate": 2400000000, 00:19:10.319 "poll_groups": [ 00:19:10.319 { 00:19:10.319 "name": "nvmf_tgt_poll_group_000", 00:19:10.319 "admin_qpairs": 0, 00:19:10.319 "io_qpairs": 224, 00:19:10.319 "current_admin_qpairs": 0, 00:19:10.319 "current_io_qpairs": 0, 00:19:10.319 "pending_bdev_io": 0, 00:19:10.319 "completed_nvme_io": 225, 00:19:10.319 "transports": [ 00:19:10.319 { 00:19:10.319 "trtype": "TCP" 00:19:10.319 } 00:19:10.319 ] 00:19:10.319 }, 00:19:10.319 { 00:19:10.319 "name": "nvmf_tgt_poll_group_001", 00:19:10.319 "admin_qpairs": 1, 00:19:10.319 "io_qpairs": 223, 00:19:10.319 "current_admin_qpairs": 0, 00:19:10.319 "current_io_qpairs": 0, 00:19:10.319 "pending_bdev_io": 0, 00:19:10.319 "completed_nvme_io": 238, 00:19:10.319 "transports": [ 00:19:10.319 { 00:19:10.319 "trtype": "TCP" 00:19:10.319 } 00:19:10.319 ] 00:19:10.319 }, 00:19:10.319 { 00:19:10.319 "name": "nvmf_tgt_poll_group_002", 00:19:10.319 "admin_qpairs": 6, 00:19:10.319 "io_qpairs": 218, 00:19:10.319 "current_admin_qpairs": 0, 00:19:10.319 "current_io_qpairs": 0, 00:19:10.319 "pending_bdev_io": 0, 00:19:10.319 "completed_nvme_io": 267, 00:19:10.319 "transports": [ 00:19:10.319 { 00:19:10.319 "trtype": "TCP" 00:19:10.319 } 00:19:10.319 ] 00:19:10.319 }, 00:19:10.319 { 00:19:10.319 "name": "nvmf_tgt_poll_group_003", 00:19:10.319 "admin_qpairs": 0, 00:19:10.319 "io_qpairs": 224, 00:19:10.319 "current_admin_qpairs": 0, 00:19:10.319 "current_io_qpairs": 0, 00:19:10.319 "pending_bdev_io": 0, 
00:19:10.319 "completed_nvme_io": 509, 00:19:10.319 "transports": [ 00:19:10.319 { 00:19:10.319 "trtype": "TCP" 00:19:10.319 } 00:19:10.319 ] 00:19:10.319 } 00:19:10.319 ] 00:19:10.319 }' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # 
set +e 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:10.319 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:10.319 rmmod nvme_tcp 00:19:10.580 rmmod nvme_fabrics 00:19:10.580 rmmod nvme_keyring 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 3089419 ']' 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 3089419 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3089419 ']' 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3089419 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3089419 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3089419' 00:19:10.580 killing process with pid 3089419 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3089419 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@976 -- # wait 3089419 00:19:10.580 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:10.841 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:19:10.841 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@254 -- # local dev 00:19:10.841 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:10.841 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:10.841 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:10.841 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # return 0 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 
00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@274 -- # iptr 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-save 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-restore 00:19:12.752 00:19:12.752 real 0m37.884s 00:19:12.752 user 1m54.191s 00:19:12.752 sys 0m7.673s 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.752 ************************************ 00:19:12.752 END TEST nvmf_rpc 00:19:12.752 ************************************ 
00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.752 ************************************ 00:19:12.752 START TEST nvmf_invalid 00:19:12.752 ************************************ 00:19:12.752 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:13.014 * Looking for test storage... 00:19:13.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.014 
16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:19:13.014 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.014 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:19:13.014 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.014 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:19:13.014 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:19:13.014 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.014 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:19:13.014 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:13.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.015 --rc genhtml_branch_coverage=1 00:19:13.015 --rc genhtml_function_coverage=1 00:19:13.015 --rc genhtml_legend=1 00:19:13.015 --rc geninfo_all_blocks=1 00:19:13.015 --rc geninfo_unexecuted_blocks=1 00:19:13.015 00:19:13.015 ' 
00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:13.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.015 --rc genhtml_branch_coverage=1 00:19:13.015 --rc genhtml_function_coverage=1 00:19:13.015 --rc genhtml_legend=1 00:19:13.015 --rc geninfo_all_blocks=1 00:19:13.015 --rc geninfo_unexecuted_blocks=1 00:19:13.015 00:19:13.015 ' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:13.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.015 --rc genhtml_branch_coverage=1 00:19:13.015 --rc genhtml_function_coverage=1 00:19:13.015 --rc genhtml_legend=1 00:19:13.015 --rc geninfo_all_blocks=1 00:19:13.015 --rc geninfo_unexecuted_blocks=1 00:19:13.015 00:19:13.015 ' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:13.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.015 --rc genhtml_branch_coverage=1 00:19:13.015 --rc genhtml_function_coverage=1 00:19:13.015 --rc genhtml_legend=1 00:19:13.015 --rc geninfo_all_blocks=1 00:19:13.015 --rc geninfo_unexecuted_blocks=1 00:19:13.015 00:19:13.015 ' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.015 16:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- paths/export.sh@5 -- # export PATH 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 
00:19:13.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # 
eval '_remove_target_ns 15> /dev/null' 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:19:13.015 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local -ga e810 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:19:19.605 16:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 
00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:19.605 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:19.605 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:19.605 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:19.605 
16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:19.605 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@247 -- # create_target_ns 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:19.605 16:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:19.605 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:19.866 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:19.866 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:19.866 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:19.866 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 -- # local -g _dev 00:19:19.866 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:19.866 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:19.866 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:19.867 16:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:19.867 10.0.0.1 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval 'ip netns 
exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:19.867 10.0.0.2 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:19.867 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:20.128 16:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:20.128 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:20.129 PING 10.0.0.1 (10.0.0.1) 
56(84) bytes of data. 00:19:20.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.645 ms 00:19:20.129 00:19:20.129 --- 10.0.0.1 ping statistics --- 00:19:20.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.129 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:20.129 16:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:20.129 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:20.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:19:20.129 00:19:20.129 --- 10.0.0.2 ping statistics --- 00:19:20.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.129 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:20.129 16:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:20.129 16:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # return 1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev= 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@160 -- # return 0 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:19:20.129 
16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target1 
00:19:20.129 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target1 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # return 1 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev= 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@160 -- # return 0 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:20.130 ' 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:20.130 16:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=3099098 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 3099098 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3099098 ']' 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:20.130 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.130 [2024-11-05 16:43:27.172705] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:19:20.130 [2024-11-05 16:43:27.172781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.390 [2024-11-05 16:43:27.255998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.390 [2024-11-05 16:43:27.299727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.390 [2024-11-05 16:43:27.299770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.390 [2024-11-05 16:43:27.299778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.390 [2024-11-05 16:43:27.299785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.390 [2024-11-05 16:43:27.299791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.390 [2024-11-05 16:43:27.301686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.390 [2024-11-05 16:43:27.301796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.390 [2024-11-05 16:43:27.302030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.390 [2024-11-05 16:43:27.302031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.961 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.961 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:19:20.961 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:20.961 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.961 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.961 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.961 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:20.961 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15366 00:19:21.221 [2024-11-05 16:43:28.173743] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:21.221 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:19:21.221 { 00:19:21.221 "nqn": "nqn.2016-06.io.spdk:cnode15366", 00:19:21.221 "tgt_name": "foobar", 00:19:21.221 "method": "nvmf_create_subsystem", 00:19:21.221 "req_id": 1 00:19:21.221 } 00:19:21.221 Got JSON-RPC error 
response 00:19:21.221 response: 00:19:21.221 { 00:19:21.221 "code": -32603, 00:19:21.221 "message": "Unable to find target foobar" 00:19:21.221 }' 00:19:21.221 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:19:21.221 { 00:19:21.221 "nqn": "nqn.2016-06.io.spdk:cnode15366", 00:19:21.221 "tgt_name": "foobar", 00:19:21.221 "method": "nvmf_create_subsystem", 00:19:21.221 "req_id": 1 00:19:21.221 } 00:19:21.221 Got JSON-RPC error response 00:19:21.221 response: 00:19:21.221 { 00:19:21.221 "code": -32603, 00:19:21.221 "message": "Unable to find target foobar" 00:19:21.221 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:21.221 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:21.221 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26313 00:19:21.482 [2024-11-05 16:43:28.374483] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26313: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:21.482 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:19:21.482 { 00:19:21.482 "nqn": "nqn.2016-06.io.spdk:cnode26313", 00:19:21.482 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:21.482 "method": "nvmf_create_subsystem", 00:19:21.482 "req_id": 1 00:19:21.482 } 00:19:21.482 Got JSON-RPC error response 00:19:21.482 response: 00:19:21.482 { 00:19:21.482 "code": -32602, 00:19:21.482 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:21.482 }' 00:19:21.482 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:19:21.482 { 00:19:21.482 "nqn": "nqn.2016-06.io.spdk:cnode26313", 00:19:21.482 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:21.482 "method": "nvmf_create_subsystem", 
00:19:21.482 "req_id": 1 00:19:21.482 } 00:19:21.482 Got JSON-RPC error response 00:19:21.482 response: 00:19:21.482 { 00:19:21.482 "code": -32602, 00:19:21.482 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:21.482 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:21.482 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:21.482 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1461 00:19:21.743 [2024-11-05 16:43:28.559031] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1461: invalid model number 'SPDK_Controller' 00:19:21.743 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:19:21.743 { 00:19:21.743 "nqn": "nqn.2016-06.io.spdk:cnode1461", 00:19:21.743 "model_number": "SPDK_Controller\u001f", 00:19:21.743 "method": "nvmf_create_subsystem", 00:19:21.743 "req_id": 1 00:19:21.743 } 00:19:21.743 Got JSON-RPC error response 00:19:21.743 response: 00:19:21.743 { 00:19:21.743 "code": -32602, 00:19:21.743 "message": "Invalid MN SPDK_Controller\u001f" 00:19:21.743 }' 00:19:21.743 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:19:21.743 { 00:19:21.743 "nqn": "nqn.2016-06.io.spdk:cnode1461", 00:19:21.743 "model_number": "SPDK_Controller\u001f", 00:19:21.743 "method": "nvmf_create_subsystem", 00:19:21.743 "req_id": 1 00:19:21.743 } 00:19:21.743 Got JSON-RPC error response 00:19:21.743 response: 00:19:21.743 { 00:19:21.743 "code": -32602, 00:19:21.743 "message": "Invalid MN SPDK_Controller\u001f" 00:19:21.743 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:21.743 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:19:21.743 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 
16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:19:21.744 16:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:19:21.744 16:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:19:21.744 16:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.744 16:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]]
00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '<^kP"N|jQvB{px@9HXEH,'
00:19:21.744 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '<^kP"N|jQvB{px@9HXEH,' nqn.2016-06.io.spdk:cnode10939
00:19:22.006 [2024-11-05 16:43:28.912188] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10939: invalid serial number '<^kP"N|jQvB{px@9HXEH,'
00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:19:22.006 {
00:19:22.006 "nqn": "nqn.2016-06.io.spdk:cnode10939",
00:19:22.006 "serial_number": "<^kP\"N|jQvB{px@9HXEH,",
00:19:22.006 "method": "nvmf_create_subsystem",
00:19:22.006 "req_id": 1
00:19:22.006 }
00:19:22.006 Got JSON-RPC error response
00:19:22.006 response:
00:19:22.006 {
00:19:22.006 "code": -32602,
00:19:22.006 "message": "Invalid SN <^kP\"N|jQvB{px@9HXEH,"
00:19:22.006 }'
00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:19:22.006 {
00:19:22.006 "nqn": "nqn.2016-06.io.spdk:cnode10939",
00:19:22.006 "serial_number": "<^kP\"N|jQvB{px@9HXEH,",
00:19:22.006 "method": "nvmf_create_subsystem",
00:19:22.006 "req_id": 1
00:19:22.006 }
00:19:22.006 Got JSON-RPC error response
00:19:22.006 response:
00:19:22.006 {
00:19:22.006 "code": -32602,
00:19:22.006 "message": "Invalid SN <^kP\"N|jQvB{px@9HXEH,"
00:19:22.006 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:19:22.006 16:43:28
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.006 16:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.006 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:19:22.007 16:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:22.007 16:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:19:22.007 16:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.007 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
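The xtrace above shows `gen_random_s` (target/invalid.sh) assembling a random serial number one character at a time: `printf %x` turns a code point into hex, `echo -e '\xNN'` expands it to the literal character, and `string+=` accumulates it. The sketch below is a hedged reconstruction of that technique from the trace, not the actual invalid.sh source; the real helper indexes the `chars` array logged at invalid.sh@21 (code points 32..127), while this version draws a printable code point directly.

```shell
# Hypothetical reconstruction of the gen_random_s pattern seen in the trace:
# build an N-character string of printable ASCII via printf %x + echo -e.
gen_random_s() {
  local length=$1 ll string=
  for ((ll = 0; ll < length; ll++)); do
    # pick a printable, non-space code point (33..126); the real script
    # indexes its chars array instead
    local code=$((RANDOM % 94 + 33))
    # printf %x -> hex digits, echo -e '\xNN' -> the literal character
    string+=$(echo -e "\\x$(printf %x "$code")")
  done
  printf '%s\n' "$string"
}

s=$(gen_random_s 41)
printf '%s\n' "${#s}"   # length of the generated serial number: 41
```

Command substitution strips only trailing newlines, so even awkward characters such as quotes, backticks, and globs survive the `string+=$(...)` append, which is why the traced strings can contain `"`, `{`, and `` ` ``.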
00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 
00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
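At invalid.sh@54-55 earlier in the trace, the test captures the JSON-RPC error returned by `nvmf_create_subsystem` and glob-matches it before moving on. A reduced, standalone sketch of that assertion pattern follows; the response text is copied from the log, while the variable name and printed message are illustrative:

```shell
# The rpc.py error output captured for the rejected serial number, as logged.
out='Got JSON-RPC error response
response:
{
  "code": -32602,
  "message": "Invalid SN <^kP\"N|jQvB{px@9HXEH,"
}'
# invalid.sh@55 pattern-matches the captured text against *Invalid SN*
# before the test proceeds to the next malformed input.
if [[ $out == *"Invalid SN"* ]]; then
  printf '%s\n' "subsystem creation rejected as expected"
fi
```

The `*\I\n\v\a\l\i\d\ \S\N*` form in the trace is the same glob with every character backslash-escaped by xtrace; it matches identically.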
00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:19:22.269 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 
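The teardown later in this section (nvmf/setup.sh@211-214) calls `flush_ip` for each test device, building an `ip addr flush` command and eval'ing it, optionally inside a network namespace. This dry-run sketch is a hypothetical reconstruction that only prints the command it would run, so it executes without root; the namespace name in the second call is made up:

```shell
# Prints the command the real flush_ip evals; per the trace, in_ns is empty
# for cvl_0_0/cvl_0_1, so the bare ip invocation is used.
flush_ip() {
  local dev=$1 in_ns=${2-}
  local cmd="ip addr flush dev $dev"
  # when a namespace is supplied, wrap the command in ip netns exec
  [[ -n $in_ns ]] && cmd="ip netns exec $in_ns $cmd"
  printf '%s\n' "$cmd"
}

flush_ip cvl_0_0
flush_ip cvl_0_1 spdk_target_ns   # hypothetical namespace name
```

In the logged run `[[ -n '' ]]` fails at setup.sh@212, so both devices are flushed in the root namespace, matching the `ip addr flush dev cvl_0_0` / `cvl_0_1` lines in the teardown.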
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:19:22.270 
16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.270 16:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]]
00:19:22.270 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=SnCns.bN.34{G8.<<q^{m?/tyuFk0T$|;b-sBJO`'
00:19:24.354 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # 
_remove_target_ns
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # delete_main_bridge
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # return 0
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns=
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=()
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@274 -- # iptr
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-save
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-restore
00:19:26.898
00:19:26.898 real 0m13.627s
00:19:26.898 user 0m20.525s
00:19:26.898 sys 0m6.384s
16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:19:26.898 ************************************
00:19:26.898 END TEST nvmf_invalid
00:19:26.898 ************************************
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:26.898 ************************************
00:19:26.898 START TEST nvmf_connect_stress
00:19:26.898 ************************************
00:19:26.898 16:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:26.898 * Looking for test storage... 00:19:26.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.898 16:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.898 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@368 -- # return 0 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.899 --rc genhtml_branch_coverage=1 00:19:26.899 --rc genhtml_function_coverage=1 00:19:26.899 --rc genhtml_legend=1 00:19:26.899 --rc geninfo_all_blocks=1 00:19:26.899 --rc geninfo_unexecuted_blocks=1 00:19:26.899 00:19:26.899 ' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.899 --rc genhtml_branch_coverage=1 00:19:26.899 --rc genhtml_function_coverage=1 00:19:26.899 --rc genhtml_legend=1 00:19:26.899 --rc geninfo_all_blocks=1 00:19:26.899 --rc geninfo_unexecuted_blocks=1 00:19:26.899 00:19:26.899 ' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.899 --rc genhtml_branch_coverage=1 00:19:26.899 --rc genhtml_function_coverage=1 00:19:26.899 --rc genhtml_legend=1 00:19:26.899 --rc geninfo_all_blocks=1 00:19:26.899 --rc geninfo_unexecuted_blocks=1 00:19:26.899 00:19:26.899 ' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.899 --rc genhtml_branch_coverage=1 00:19:26.899 --rc genhtml_function_coverage=1 00:19:26.899 --rc genhtml_legend=1 00:19:26.899 --rc geninfo_all_blocks=1 00:19:26.899 --rc geninfo_unexecuted_blocks=1 00:19:26.899 00:19:26.899 ' 00:19:26.899 16:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:26.899 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:19:26.899 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:33.490 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:33.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:33.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:33.491 16:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:33.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:33.491 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@247 -- # create_target_ns 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@136 -- # ip netns add 
nvmf_ns_spdk 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:33.491 16:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:33.491 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 
in_ns= 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:33.492 10.0.0.1 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@11 -- # local val=167772162 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:33.492 10.0.0.2 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:33.492 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # 
(( pair = 0 )) 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:33.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.698 ms 00:19:33.755 00:19:33.755 --- 10.0.0.1 ping statistics --- 00:19:33.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.755 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:33.755 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:33.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:33.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:19:33.756 00:19:33.756 --- 10.0.0.2 ping statistics --- 00:19:33.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.756 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # return 1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev= 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@160 -- # return 0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:33.756 16:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # return 1 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev= 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@160 -- # return 0 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:33.756 ' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=3104292 00:19:33.756 16:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 3104292 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3104292 ']' 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.756 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.018 [2024-11-05 16:43:40.868245] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:19:34.018 [2024-11-05 16:43:40.868312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.018 [2024-11-05 16:43:40.970601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:34.018 [2024-11-05 16:43:41.022693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:34.018 [2024-11-05 16:43:41.022758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.018 [2024-11-05 16:43:41.022767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.018 [2024-11-05 16:43:41.022775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.018 [2024-11-05 16:43:41.022781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.018 [2024-11-05 16:43:41.024657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.018 [2024-11-05 16:43:41.024829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.018 [2024-11-05 16:43:41.024849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.962 [2024-11-05 16:43:41.728054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.962 [2024-11-05 16:43:41.752533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.962 NULL1 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3104568 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.962 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.223 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.223 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:35.223 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.223 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.223 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.484 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.484 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:35.484 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.484 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.484 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.053 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.053 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:36.053 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.053 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.053 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.312 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.312 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:36.313 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.313 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.313 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.572 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.573 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:36.573 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.573 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.573 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.834 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.834 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:36.834 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.834 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.834 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.095 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.095 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:37.095 16:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.095 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.095 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.666 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.666 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:37.666 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.666 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.666 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.927 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.927 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:37.927 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.927 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.927 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.188 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.188 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:38.188 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.188 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.188 
16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.450 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.450 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:38.450 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.450 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.450 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.021 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.021 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:39.021 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.021 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.021 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:39.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.543 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.543 
16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:39.543 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.543 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.543 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.803 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.803 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:39.803 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.803 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.803 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.064 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.064 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:40.064 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.064 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.064 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.634 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.634 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:40.634 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:19:40.634 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.634 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.896 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.896 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:40.896 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.896 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.896 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.156 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.156 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:41.156 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.156 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.156 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.417 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.418 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:41.418 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.418 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.418 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:19:41.679 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.679 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:41.679 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.679 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.679 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.252 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.252 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:42.252 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.252 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.252 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.513 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.513 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:42.513 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.513 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.513 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.773 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.773 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3104568 00:19:42.773 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.773 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.773 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.034 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.034 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:43.034 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.034 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.034 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.295 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.295 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:43.295 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.295 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.295 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.867 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.867 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:43.867 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.867 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:43.867 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.129 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.129 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:44.129 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.129 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.129 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.390 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.390 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:44.390 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.390 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.390 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:44.690 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.690 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.998 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
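The repeated `kill -0 3104568` probes in the trace above are connect_stress.sh polling the stress-client PID until it exits (`kill -0` delivers no signal; it only tests that the process still exists). A minimal sketch of that wait pattern, with hypothetical names (the real script interleaves `rpc_cmd` calls between probes; this shows only the polling logic):

```shell
#!/usr/bin/env bash
# Sketch of the kill -0 polling loop seen in connect_stress.sh.
# Assumption: function and variable names here are illustrative only.
wait_for_exit() {
    local pid=$1
    # kill -0 sends no signal; it succeeds while the PID exists and
    # we are permitted to signal it, and fails once the process is gone
    # ("No such process", as in the trace above).
    while kill -0 "$pid" 2>/dev/null; do
        sleep 0.25
    done
}

sleep 1 &          # stand-in for the stress client
child=$!
wait_for_exit "$child"
echo "process $child has exited"
```

Once `kill -0` fails, the script falls through to `wait` on the PID and then runs its cleanup trap, matching the teardown sequence that follows in the log.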
00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3104568 00:19:44.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3104568) - No such process 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3104568 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:44.998 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:44.998 rmmod nvme_tcp 00:19:44.998 rmmod nvme_fabrics 00:19:44.998 rmmod nvme_keyring 00:19:44.998 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:44.998 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:19:44.998 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@107 -- # return 0 00:19:44.998 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 3104292 ']' 00:19:44.998 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 3104292 00:19:44.998 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3104292 ']' 00:19:44.998 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3104292 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3104292 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3104292' 00:19:45.269 killing process with pid 3104292 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3104292 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3104292 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@254 -- # local dev 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@257 -- # remove_target_ns 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:45.269 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # return 0 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@274 -- # iptr 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-save 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-restore 00:19:47.244 00:19:47.244 real 0m20.782s 00:19:47.244 user 0m42.136s 00:19:47.244 sys 0m8.828s 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:47.244 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:47.244 ************************************ 00:19:47.244 END TEST nvmf_connect_stress 00:19:47.244 ************************************ 00:19:47.505 16:43:54 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.505 ************************************ 00:19:47.505 START TEST nvmf_fused_ordering 00:19:47.505 ************************************ 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:47.505 * Looking for test storage... 00:19:47.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.505 16:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:47.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.505 --rc genhtml_branch_coverage=1 00:19:47.505 --rc genhtml_function_coverage=1 00:19:47.505 --rc genhtml_legend=1 00:19:47.505 --rc 
geninfo_all_blocks=1 00:19:47.505 --rc geninfo_unexecuted_blocks=1 00:19:47.505 00:19:47.505 ' 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:47.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.505 --rc genhtml_branch_coverage=1 00:19:47.505 --rc genhtml_function_coverage=1 00:19:47.505 --rc genhtml_legend=1 00:19:47.505 --rc geninfo_all_blocks=1 00:19:47.505 --rc geninfo_unexecuted_blocks=1 00:19:47.505 00:19:47.505 ' 00:19:47.505 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:47.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.505 --rc genhtml_branch_coverage=1 00:19:47.505 --rc genhtml_function_coverage=1 00:19:47.505 --rc genhtml_legend=1 00:19:47.505 --rc geninfo_all_blocks=1 00:19:47.505 --rc geninfo_unexecuted_blocks=1 00:19:47.505 00:19:47.505 ' 00:19:47.506 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:47.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.506 --rc genhtml_branch_coverage=1 00:19:47.506 --rc genhtml_function_coverage=1 00:19:47.506 --rc genhtml_legend=1 00:19:47.506 --rc geninfo_all_blocks=1 00:19:47.506 --rc geninfo_unexecuted_blocks=1 00:19:47.506 00:19:47.506 ' 00:19:47.506 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.506 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:47.767 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:47.768 16:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:47.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:19:47.768 16:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:19:47.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # e810=() 
00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.912 16:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:55.912 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:55.912 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:55.912 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:55.913 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:55.913 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:55.913 
16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@247 -- # create_target_ns 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:55.913 16:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:55.913 10.0.0.1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772162 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:55.913 10.0.0.2 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:55.913 
16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:55.913 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:55.914 16:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:55.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:55.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.634 ms 00:19:55.914 00:19:55.914 --- 10.0.0.1 ping statistics --- 00:19:55.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.914 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:55.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:19:55.914 00:19:55.914 --- 10.0.0.2 ping statistics --- 00:19:55.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.914 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:55.914 
16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:55.914 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:55.914 
16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # return 1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev= 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@160 -- # return 0 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:55.914 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target1 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # return 1 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev= 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@160 -- # return 0 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:55.915 ' 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
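The trace above repeatedly resolves interface IPs by reading `/sys/class/net/<dev>/ifalias` (setup.sh's `get_ip_address`, optionally wrapped in `ip netns exec nvmf_ns_spdk`). A minimal sketch of that lookup follows; it simulates the sysfs file with a temp directory, since the real path assumes the `cvl_0_1` device and the `nvmf_ns_spdk` namespace exist on this test rig:

```shell
# Sketch of setup.sh's get_ip_address: an interface's test IP is stored
# in its ifalias file. We fake /sys/class/net/<dev>/ifalias under a temp
# dir because the real cvl_0_1 device only exists on the CI machine.
sysfs_root=$(mktemp -d)
mkdir -p "$sysfs_root/cvl_0_1"
echo "10.0.0.2" > "$sysfs_root/cvl_0_1/ifalias"

get_ip_address() {
    local dev=$1
    # The real helper prefixes this with "ip netns exec nvmf_ns_spdk"
    # when the device lives inside the target's network namespace.
    cat "$sysfs_root/$dev/ifalias"
}

ip=$(get_ip_address cvl_0_1)
echo "$ip"
```

The trace's `[[ -n 10.0.0.2 ]]` check then guards against an empty ifalias before the address is echoed back to the caller.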
00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=3110788 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 3110788 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3110788 ']' 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.915 [2024-11-05 16:44:02.164488] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:19:55.915 [2024-11-05 16:44:02.164557] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.915 [2024-11-05 16:44:02.266004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.915 [2024-11-05 16:44:02.315673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.915 [2024-11-05 16:44:02.315726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.915 [2024-11-05 16:44:02.315734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.915 [2024-11-05 16:44:02.315741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.915 [2024-11-05 16:44:02.315758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.915 [2024-11-05 16:44:02.316510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:55.915 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:56.177 [2024-11-05 16:44:03.013734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:56.177 [2024-11-05 16:44:03.029997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:56.177 NULL1 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
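The target bring-up traced here (create the TCP transport, add subsystem `nqn.2016-06.io.spdk:cnode1`, listen on 10.0.0.2:4420, back it with the `NULL1` null bdev) reduces to a handful of SPDK RPCs. A sketch using `scripts/rpc.py` is below; the function is only defined, never invoked, since it assumes a live `nvmf_tgt` listening on `/var/tmp/spdk.sock`:

```shell
# Sketch of the RPC sequence from fused_ordering.sh@15-20 above.
# Assumes SPDK's scripts/rpc.py is on PATH and an nvmf_tgt is running;
# defined only, not called, because no target is running here.
configure_target() {
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB, 512-byte blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
}
type configure_target >/dev/null && echo defined
```

With that in place, the `fused_ordering` binary connects with the same `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...cnode1` string seen in the trace and drives the fused-compare-and-write loop whose counters fill the rest of this log.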
common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.177 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:56.177 [2024-11-05 16:44:03.087631] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:19:56.177 [2024-11-05 16:44:03.087674] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111061 ] 00:19:56.439 Attached to nqn.2016-06.io.spdk:cnode1 00:19:56.439 Namespace ID: 1 size: 1GB 00:19:56.439 fused_ordering(0) 00:19:56.439 fused_ordering(1) 00:19:56.439 fused_ordering(2) 00:19:56.439 fused_ordering(3) 00:19:56.439 fused_ordering(4) 00:19:56.439 fused_ordering(5) 00:19:56.439 fused_ordering(6) 00:19:56.439 fused_ordering(7) 00:19:56.439 fused_ordering(8) 00:19:56.439 fused_ordering(9) 00:19:56.439 fused_ordering(10) 00:19:56.439 fused_ordering(11) 00:19:56.439 fused_ordering(12) 00:19:56.439 fused_ordering(13) 00:19:56.439 fused_ordering(14) 00:19:56.439 fused_ordering(15) 00:19:56.439 fused_ordering(16) 00:19:56.439 fused_ordering(17) 00:19:56.439 fused_ordering(18) 00:19:56.439 fused_ordering(19) 00:19:56.439 fused_ordering(20) 00:19:56.439 fused_ordering(21) 00:19:56.439 fused_ordering(22) 00:19:56.439 fused_ordering(23) 00:19:56.439 fused_ordering(24) 00:19:56.439 fused_ordering(25) 00:19:56.439 fused_ordering(26) 00:19:56.439 fused_ordering(27) 00:19:56.439 
fused_ordering(28) 00:19:56.439 fused_ordering(29) 00:19:56.439 fused_ordering(30) 00:19:56.439 fused_ordering(31) 00:19:56.439 fused_ordering(32) 00:19:56.439 fused_ordering(33) 00:19:56.439 fused_ordering(34) 00:19:56.439 fused_ordering(35) 00:19:56.439 fused_ordering(36) 00:19:56.439 fused_ordering(37) 00:19:56.439 fused_ordering(38) 00:19:56.439 fused_ordering(39) 00:19:56.439 fused_ordering(40) 00:19:56.439 fused_ordering(41) 00:19:56.439 fused_ordering(42) 00:19:56.439 fused_ordering(43) 00:19:56.439 fused_ordering(44) 00:19:56.439 fused_ordering(45) 00:19:56.439 fused_ordering(46) 00:19:56.439 fused_ordering(47) 00:19:56.439 fused_ordering(48) 00:19:56.439 fused_ordering(49) 00:19:56.439 fused_ordering(50) 00:19:56.439 fused_ordering(51) 00:19:56.439 fused_ordering(52) 00:19:56.439 fused_ordering(53) 00:19:56.439 fused_ordering(54) 00:19:56.439 fused_ordering(55) 00:19:56.439 fused_ordering(56) 00:19:56.439 fused_ordering(57) 00:19:56.439 fused_ordering(58) 00:19:56.439 fused_ordering(59) 00:19:56.439 fused_ordering(60) 00:19:56.439 fused_ordering(61) 00:19:56.439 fused_ordering(62) 00:19:56.439 fused_ordering(63) 00:19:56.439 fused_ordering(64) 00:19:56.439 fused_ordering(65) 00:19:56.439 fused_ordering(66) 00:19:56.439 fused_ordering(67) 00:19:56.439 fused_ordering(68) 00:19:56.439 fused_ordering(69) 00:19:56.439 fused_ordering(70) 00:19:56.439 fused_ordering(71) 00:19:56.439 fused_ordering(72) 00:19:56.439 fused_ordering(73) 00:19:56.439 fused_ordering(74) 00:19:56.439 fused_ordering(75) 00:19:56.439 fused_ordering(76) 00:19:56.439 fused_ordering(77) 00:19:56.439 fused_ordering(78) 00:19:56.439 fused_ordering(79) 00:19:56.439 fused_ordering(80) 00:19:56.439 fused_ordering(81) 00:19:56.439 fused_ordering(82) 00:19:56.439 fused_ordering(83) 00:19:56.439 fused_ordering(84) 00:19:56.439 fused_ordering(85) 00:19:56.439 fused_ordering(86) 00:19:56.439 fused_ordering(87) 00:19:56.439 fused_ordering(88) 00:19:56.439 fused_ordering(89) 00:19:56.439 
fused_ordering(90) 00:19:56.439 fused_ordering(91) 00:19:56.439 fused_ordering(92) 00:19:56.439 fused_ordering(93) 00:19:56.439 fused_ordering(94) 00:19:56.439 fused_ordering(95) 00:19:56.439 fused_ordering(96) 00:19:56.439 fused_ordering(97) 00:19:56.439 fused_ordering(98) 00:19:56.439 fused_ordering(99) 00:19:56.439 fused_ordering(100) 00:19:56.439 fused_ordering(101) 00:19:56.439 fused_ordering(102) 00:19:56.439 fused_ordering(103) 00:19:56.439 fused_ordering(104) 00:19:56.439 fused_ordering(105) 00:19:56.439 fused_ordering(106) 00:19:56.439 fused_ordering(107) 00:19:56.439 fused_ordering(108) 00:19:56.439 fused_ordering(109) 00:19:56.439 fused_ordering(110) 00:19:56.439 fused_ordering(111) 00:19:56.439 fused_ordering(112) 00:19:56.439 fused_ordering(113) 00:19:56.439 fused_ordering(114) 00:19:56.439 fused_ordering(115) 00:19:56.439 fused_ordering(116) 00:19:56.439 fused_ordering(117) 00:19:56.439 fused_ordering(118) 00:19:56.439 fused_ordering(119) 00:19:56.439 fused_ordering(120) 00:19:56.439 fused_ordering(121) 00:19:56.439 fused_ordering(122) 00:19:56.439 fused_ordering(123) 00:19:56.439 fused_ordering(124) 00:19:56.439 fused_ordering(125) 00:19:56.439 fused_ordering(126) 00:19:56.439 fused_ordering(127) 00:19:56.439 fused_ordering(128) 00:19:56.439 fused_ordering(129) 00:19:56.439 fused_ordering(130) 00:19:56.439 fused_ordering(131) 00:19:56.439 fused_ordering(132) 00:19:56.439 fused_ordering(133) 00:19:56.439 fused_ordering(134) 00:19:56.439 fused_ordering(135) 00:19:56.439 fused_ordering(136) 00:19:56.439 fused_ordering(137) 00:19:56.439 fused_ordering(138) 00:19:56.439 fused_ordering(139) 00:19:56.439 fused_ordering(140) 00:19:56.439 fused_ordering(141) 00:19:56.439 fused_ordering(142) 00:19:56.439 fused_ordering(143) 00:19:56.439 fused_ordering(144) 00:19:56.439 fused_ordering(145) 00:19:56.439 fused_ordering(146) 00:19:56.439 fused_ordering(147) 00:19:56.439 fused_ordering(148) 00:19:56.439 fused_ordering(149) 00:19:56.439 fused_ordering(150) 
00:19:56.439 fused_ordering(151) 00:19:56.439 fused_ordering(152) 00:19:56.439 fused_ordering(153) 00:19:56.439 fused_ordering(154) 00:19:56.439 fused_ordering(155) 00:19:56.439 fused_ordering(156) 00:19:56.439 fused_ordering(157) 00:19:56.439 fused_ordering(158) 00:19:56.439 fused_ordering(159) 00:19:56.439 fused_ordering(160) 00:19:56.439 fused_ordering(161) 00:19:56.439 fused_ordering(162) 00:19:56.439 fused_ordering(163) 00:19:56.439 fused_ordering(164) 00:19:56.439 fused_ordering(165) 00:19:56.439 fused_ordering(166) 00:19:56.439 fused_ordering(167) 00:19:56.439 fused_ordering(168) 00:19:56.439 fused_ordering(169) 00:19:56.439 fused_ordering(170) 00:19:56.439 fused_ordering(171) 00:19:56.439 fused_ordering(172) 00:19:56.439 fused_ordering(173) 00:19:56.439 fused_ordering(174) 00:19:56.440 fused_ordering(175) 00:19:56.440 fused_ordering(176) 00:19:56.440 fused_ordering(177) 00:19:56.440 fused_ordering(178) 00:19:56.440 fused_ordering(179) 00:19:56.440 fused_ordering(180) 00:19:56.440 fused_ordering(181) 00:19:56.440 fused_ordering(182) 00:19:56.440 fused_ordering(183) 00:19:56.440 fused_ordering(184) 00:19:56.440 fused_ordering(185) 00:19:56.440 fused_ordering(186) 00:19:56.440 fused_ordering(187) 00:19:56.440 fused_ordering(188) 00:19:56.440 fused_ordering(189) 00:19:56.440 fused_ordering(190) 00:19:56.440 fused_ordering(191) 00:19:56.440 fused_ordering(192) 00:19:56.440 fused_ordering(193) 00:19:56.440 fused_ordering(194) 00:19:56.440 fused_ordering(195) 00:19:56.440 fused_ordering(196) 00:19:56.440 fused_ordering(197) 00:19:56.440 fused_ordering(198) 00:19:56.440 fused_ordering(199) 00:19:56.440 fused_ordering(200) 00:19:56.440 fused_ordering(201) 00:19:56.440 fused_ordering(202) 00:19:56.440 fused_ordering(203) 00:19:56.440 fused_ordering(204) 00:19:56.440 fused_ordering(205) 00:19:57.011 fused_ordering(206) 00:19:57.011 fused_ordering(207) 00:19:57.011 fused_ordering(208) 00:19:57.011 fused_ordering(209) 00:19:57.011 fused_ordering(210) 00:19:57.011 
fused_ordering(211) 00:19:57.011 fused_ordering(212) 00:19:57.011 fused_ordering(213) 00:19:57.011 fused_ordering(214) 00:19:57.011 fused_ordering(215) 00:19:57.011 fused_ordering(216) 00:19:57.011 fused_ordering(217) 00:19:57.011 fused_ordering(218) 00:19:57.011 fused_ordering(219) 00:19:57.011 fused_ordering(220) 00:19:57.011 fused_ordering(221) 00:19:57.011 fused_ordering(222) 00:19:57.011 fused_ordering(223) 00:19:57.011 fused_ordering(224) 00:19:57.011 fused_ordering(225) 00:19:57.011 fused_ordering(226) 00:19:57.011 fused_ordering(227) 00:19:57.011 fused_ordering(228) 00:19:57.011 fused_ordering(229) 00:19:57.011 fused_ordering(230) 00:19:57.011 fused_ordering(231) 00:19:57.011 fused_ordering(232) 00:19:57.011 fused_ordering(233) 00:19:57.011 fused_ordering(234) 00:19:57.011 fused_ordering(235) 00:19:57.011 fused_ordering(236) 00:19:57.011 fused_ordering(237) 00:19:57.011 fused_ordering(238) 00:19:57.011 fused_ordering(239) 00:19:57.011 fused_ordering(240) 00:19:57.011 fused_ordering(241) 00:19:57.011 fused_ordering(242) 00:19:57.011 fused_ordering(243) 00:19:57.011 fused_ordering(244) 00:19:57.011 fused_ordering(245) 00:19:57.011 fused_ordering(246) 00:19:57.011 fused_ordering(247) 00:19:57.011 fused_ordering(248) 00:19:57.011 fused_ordering(249) 00:19:57.011 fused_ordering(250) 00:19:57.011 fused_ordering(251) 00:19:57.011 fused_ordering(252) 00:19:57.011 fused_ordering(253) 00:19:57.011 fused_ordering(254) 00:19:57.011 fused_ordering(255) 00:19:57.011 fused_ordering(256) 00:19:57.011 fused_ordering(257) 00:19:57.011 fused_ordering(258) 00:19:57.011 fused_ordering(259) 00:19:57.011 fused_ordering(260) 00:19:57.011 fused_ordering(261) 00:19:57.011 fused_ordering(262) 00:19:57.011 fused_ordering(263) 00:19:57.011 fused_ordering(264) 00:19:57.011 fused_ordering(265) 00:19:57.011 fused_ordering(266) 00:19:57.011 fused_ordering(267) 00:19:57.011 fused_ordering(268) 00:19:57.011 fused_ordering(269) 00:19:57.011 fused_ordering(270) 00:19:57.011 fused_ordering(271) 
00:19:57.011 fused_ordering(272) 00:19:57.011 fused_ordering(273) 00:19:57.011 fused_ordering(274) 00:19:57.011 fused_ordering(275) 00:19:57.011 fused_ordering(276) 00:19:57.011 fused_ordering(277) 00:19:57.012 fused_ordering(278) 00:19:57.012 fused_ordering(279) 00:19:57.012 fused_ordering(280) 00:19:57.012 fused_ordering(281) 00:19:57.012 fused_ordering(282) 00:19:57.012 fused_ordering(283) 00:19:57.012 fused_ordering(284) 00:19:57.012 fused_ordering(285) 00:19:57.012 fused_ordering(286) 00:19:57.012 fused_ordering(287) 00:19:57.012 fused_ordering(288) 00:19:57.012 fused_ordering(289) 00:19:57.012 fused_ordering(290) 00:19:57.012 fused_ordering(291) 00:19:57.012 fused_ordering(292) 00:19:57.012 fused_ordering(293) 00:19:57.012 fused_ordering(294) 00:19:57.012 fused_ordering(295) 00:19:57.012 fused_ordering(296) 00:19:57.012 fused_ordering(297) 00:19:57.012 fused_ordering(298) 00:19:57.012 fused_ordering(299) 00:19:57.012 fused_ordering(300) 00:19:57.012 fused_ordering(301) 00:19:57.012 fused_ordering(302) 00:19:57.012 fused_ordering(303) 00:19:57.012 fused_ordering(304) 00:19:57.012 fused_ordering(305) 00:19:57.012 fused_ordering(306) 00:19:57.012 fused_ordering(307) 00:19:57.012 fused_ordering(308) 00:19:57.012 fused_ordering(309) 00:19:57.012 fused_ordering(310) 00:19:57.012 fused_ordering(311) 00:19:57.012 fused_ordering(312) 00:19:57.012 fused_ordering(313) 00:19:57.012 fused_ordering(314) 00:19:57.012 fused_ordering(315) 00:19:57.012 fused_ordering(316) 00:19:57.012 fused_ordering(317) 00:19:57.012 fused_ordering(318) 00:19:57.012 fused_ordering(319) 00:19:57.012 fused_ordering(320) 00:19:57.012 fused_ordering(321) 00:19:57.012 fused_ordering(322) 00:19:57.012 fused_ordering(323) 00:19:57.012 fused_ordering(324) 00:19:57.012 fused_ordering(325) 00:19:57.012 fused_ordering(326) 00:19:57.012 fused_ordering(327) 00:19:57.012 fused_ordering(328) 00:19:57.012 fused_ordering(329) 00:19:57.012 fused_ordering(330) 00:19:57.012 fused_ordering(331) 00:19:57.012 
fused_ordering(332) 00:19:57.012 fused_ordering(333) 00:19:57.012 fused_ordering(334) 00:19:57.012 fused_ordering(335) 00:19:57.012 fused_ordering(336) 00:19:57.012 fused_ordering(337) 00:19:57.012 fused_ordering(338) 00:19:57.012 fused_ordering(339) 00:19:57.012 fused_ordering(340) 00:19:57.012 fused_ordering(341) 00:19:57.012 fused_ordering(342) 00:19:57.012 fused_ordering(343) 00:19:57.012 fused_ordering(344) 00:19:57.012 fused_ordering(345) 00:19:57.012 fused_ordering(346) 00:19:57.012 fused_ordering(347) 00:19:57.012 fused_ordering(348) 00:19:57.012 fused_ordering(349) 00:19:57.012 fused_ordering(350) 00:19:57.012 fused_ordering(351) 00:19:57.012 fused_ordering(352) 00:19:57.012 fused_ordering(353) 00:19:57.012 fused_ordering(354) 00:19:57.012 fused_ordering(355) 00:19:57.012 fused_ordering(356) 00:19:57.012 fused_ordering(357) 00:19:57.012 fused_ordering(358) 00:19:57.012 fused_ordering(359) 00:19:57.012 fused_ordering(360) 00:19:57.012 fused_ordering(361) 00:19:57.012 fused_ordering(362) 00:19:57.012 fused_ordering(363) 00:19:57.012 fused_ordering(364) 00:19:57.012 fused_ordering(365) 00:19:57.012 fused_ordering(366) 00:19:57.012 fused_ordering(367) 00:19:57.012 fused_ordering(368) 00:19:57.012 fused_ordering(369) 00:19:57.012 fused_ordering(370) 00:19:57.012 fused_ordering(371) 00:19:57.012 fused_ordering(372) 00:19:57.012 fused_ordering(373) 00:19:57.012 fused_ordering(374) 00:19:57.012 fused_ordering(375) 00:19:57.012 fused_ordering(376) 00:19:57.012 fused_ordering(377) 00:19:57.012 fused_ordering(378) 00:19:57.012 fused_ordering(379) 00:19:57.012 fused_ordering(380) 00:19:57.012 fused_ordering(381) 00:19:57.012 fused_ordering(382) 00:19:57.012 fused_ordering(383) 00:19:57.012 fused_ordering(384) 00:19:57.012 fused_ordering(385) 00:19:57.012 fused_ordering(386) 00:19:57.012 fused_ordering(387) 00:19:57.012 fused_ordering(388) 00:19:57.012 fused_ordering(389) 00:19:57.012 fused_ordering(390) 00:19:57.012 fused_ordering(391) 00:19:57.012 fused_ordering(392) 
00:19:57.012 fused_ordering(393) 00:19:57.012 fused_ordering(394) 00:19:57.012 fused_ordering(395) 00:19:57.012 fused_ordering(396) 00:19:57.012 fused_ordering(397) 00:19:57.012 fused_ordering(398) 00:19:57.012 fused_ordering(399) 00:19:57.012 fused_ordering(400) 00:19:57.012 fused_ordering(401) 00:19:57.012 fused_ordering(402) 00:19:57.012 fused_ordering(403) 00:19:57.012 fused_ordering(404) 00:19:57.012 fused_ordering(405) 00:19:57.012 fused_ordering(406) 00:19:57.012 fused_ordering(407) 00:19:57.012 fused_ordering(408) 00:19:57.012 fused_ordering(409) 00:19:57.012 fused_ordering(410) 00:19:57.273 fused_ordering(411) 00:19:57.273 fused_ordering(412) 00:19:57.273 fused_ordering(413) 00:19:57.273 fused_ordering(414) 00:19:57.273 fused_ordering(415) 00:19:57.273 fused_ordering(416) 00:19:57.273 fused_ordering(417) 00:19:57.274 fused_ordering(418) 00:19:57.274 fused_ordering(419) 00:19:57.274 fused_ordering(420) 00:19:57.274 fused_ordering(421) 00:19:57.274 fused_ordering(422) 00:19:57.274 fused_ordering(423) 00:19:57.274 fused_ordering(424) 00:19:57.274 fused_ordering(425) 00:19:57.274 fused_ordering(426) 00:19:57.274 fused_ordering(427) 00:19:57.274 fused_ordering(428) 00:19:57.274 fused_ordering(429) 00:19:57.274 fused_ordering(430) 00:19:57.274 fused_ordering(431) 00:19:57.274 fused_ordering(432) 00:19:57.274 fused_ordering(433) 00:19:57.274 fused_ordering(434) 00:19:57.274 fused_ordering(435) 00:19:57.274 fused_ordering(436) 00:19:57.274 fused_ordering(437) 00:19:57.274 fused_ordering(438) 00:19:57.274 fused_ordering(439) 00:19:57.274 fused_ordering(440) 00:19:57.274 fused_ordering(441) 00:19:57.274 fused_ordering(442) 00:19:57.274 fused_ordering(443) 00:19:57.274 fused_ordering(444) 00:19:57.274 fused_ordering(445) 00:19:57.274 fused_ordering(446) 00:19:57.274 fused_ordering(447) 00:19:57.274 fused_ordering(448) 00:19:57.274 fused_ordering(449) 00:19:57.274 fused_ordering(450) 00:19:57.274 fused_ordering(451) 00:19:57.274 fused_ordering(452) 00:19:57.274 
fused_ordering(453) 00:19:57.274 ... fused_ordering(1023) 00:19:58.420 [repetitive per-iteration fused_ordering output, iterations 453-1023 logged between 00:19:57.274 and 00:19:58.420, condensed] 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:58.420 rmmod nvme_tcp 00:19:58.420 rmmod nvme_fabrics 00:19:58.420 rmmod nvme_keyring 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r
nvme-fabrics 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 3110788 ']' 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 3110788 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3110788 ']' 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3110788 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3110788 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3110788' 00:19:58.420 killing process with pid 3110788 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3110788 00:19:58.420 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3110788 00:19:58.682 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:58.682 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 
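
The killprocess trace above signals pid 3110788 and then waits for it to exit. A minimal generic sketch of that kill-and-wait pattern (not SPDK's actual autotest_common.sh helper, which additionally checks the process name via `ps --no-headers -o comm=` as the trace shows):

```shell
#!/usr/bin/env bash
# Sketch of the kill-and-wait pattern from the trace: send SIGTERM to
# a pid, then poll until the process is gone. kill -0 probes process
# existence without delivering a signal. Generic reimplementation for
# illustration only.
killprocess_sketch() {
    local pid=$1
    kill "$pid" 2>/dev/null || return 0   # already gone: nothing to do
    while kill -0 "$pid" 2>/dev/null; do  # still present
        sleep 0.1
    done
}

sleep 30 &                # disposable background process to terminate
killprocess_sketch "$!"
```

SPDK's real helper layers a timeout and a `kill -9` escalation on top; this keeps only the signal-then-poll core.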
00:19:58.682 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@254 -- # local dev 00:19:58.682 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:58.682 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:58.682 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:58.682 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # return 0 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@274 -- # iptr 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-save 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-restore 00:20:00.597 00:20:00.597 real 0m13.178s 00:20:00.597 user 0m6.948s 00:20:00.597 sys 0m6.818s 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:00.597 
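
The iptr step above restores the firewall with `iptables-save | grep -v SPDK_NVMF | iptables-restore`, i.e. it reloads every rule except those whose text mentions SPDK's SPDK_NVMF tag. A minimal sketch of that save/filter/restore idiom, run here on sample rule text (the sample rules are invented for illustration, so no root or iptables is needed):

```shell
#!/usr/bin/env bash
# Dump-filter-reload idiom from the trace: iptables-save emits the
# ruleset as text, grep -v drops lines mentioning the tag, and
# iptables-restore atomically reloads the remainder. The sample
# rules below are assumptions for illustration.
sample_rules='-A INPUT -p tcp --dport 4420 -m comment --comment "SPDK_NVMF" -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 4421 -m comment --comment "SPDK_NVMF" -j ACCEPT'

# grep -v keeps every line that does NOT mention the tag.
kept=$(printf '%s\n' "$sample_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"

# On a live system the same filter sits mid-pipeline (root required):
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
```

Filtering the textual dump rather than deleting rules one by one means the cleanup needs no bookkeeping of which rules were added, only a recognizable tag on each of them.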
************************************ 00:20:00.597 END TEST nvmf_fused_ordering 00:20:00.597 ************************************ 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:00.597 ************************************ 00:20:00.597 START TEST nvmf_ns_masking 00:20:00.597 ************************************ 00:20:00.597 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:20:00.859 * Looking for test storage... 00:20:00.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.859 16:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.859 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:00.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.859 --rc genhtml_branch_coverage=1 00:20:00.859 --rc genhtml_function_coverage=1 00:20:00.860 --rc genhtml_legend=1 00:20:00.860 --rc geninfo_all_blocks=1 00:20:00.860 --rc 
geninfo_unexecuted_blocks=1 00:20:00.860 00:20:00.860 ' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:00.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.860 --rc genhtml_branch_coverage=1 00:20:00.860 --rc genhtml_function_coverage=1 00:20:00.860 --rc genhtml_legend=1 00:20:00.860 --rc geninfo_all_blocks=1 00:20:00.860 --rc geninfo_unexecuted_blocks=1 00:20:00.860 00:20:00.860 ' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:00.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.860 --rc genhtml_branch_coverage=1 00:20:00.860 --rc genhtml_function_coverage=1 00:20:00.860 --rc genhtml_legend=1 00:20:00.860 --rc geninfo_all_blocks=1 00:20:00.860 --rc geninfo_unexecuted_blocks=1 00:20:00.860 00:20:00.860 ' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:00.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.860 --rc genhtml_branch_coverage=1 00:20:00.860 --rc genhtml_function_coverage=1 00:20:00.860 --rc genhtml_legend=1 00:20:00.860 --rc geninfo_all_blocks=1 00:20:00.860 --rc geninfo_unexecuted_blocks=1 00:20:00.860 00:20:00.860 ' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.860 16:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@50 -- # : 0 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:00.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7648e91e-3f71-4ea7-8b47-6ab7a570f1f8 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@14 -- # ns2uuid=18add61d-552d-4b70-9a4a-79929fef681a 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6cbe5872-320a-496c-b44a-4d41bd950f47 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:00.860 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:00.861 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:00.861 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:20:00.861 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:20:00.861 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # pci_devs=() 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:09.010 
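
The array-building above buckets NICs by `vendor:device` ID out of a `pci_bus_cache` map (Intel 0x1592/0x159b into `e810`, 0x37d2 into `x722`, various Mellanox IDs into `mlx`). A minimal unprivileged sketch of that bucketing; the `pci_bus_cache` contents are hand-built here from the two E810 ports this run reports ("Found 0000:4b:00.0/.1 (0x8086 - 0x159b)"), not read from sysfs:

```shell
#!/usr/bin/env bash
# Sketch of nvmf/common.sh's device bucketing. Keys are "vendor:device";
# values are space-separated PCI addresses, so the unquoted expansion below
# word-splits into one array element per device (and an unset key adds none).
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"
)
intel=0x8086

e810=()
e810+=(${pci_bus_cache["$intel:0x1592"]})   # not present in this run: adds nothing
e810+=(${pci_bus_cache["$intel:0x159b"]})   # adds both 0000:4b:00.x ports

echo "${#e810[@]} e810 device(s): ${e810[*]}"
```

This mirrors why the log's `(( 2 == 0 ))` guard is false: two E810 devices land in `pci_devs`.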
16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:09.010 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:09.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:09.010 16:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:09.010 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.010 16:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:09.010 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@247 -- # create_target_ns 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:09.010 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:09.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:09.011 16:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:09.011 10.0.0.1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:09.011 16:44:15 
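
The `val_to_ip` calls above turn the 32-bit pool counter (starting at 0x0a000001 = 167772161) into dotted-quad addresses via `printf '%u.%u.%u.%u\n'`. The log only shows the already-split octets, so the byte decomposition below is a reconstruction; the actual setup.sh may extract the octets differently:

```shell
#!/usr/bin/env bash
# Sketch of nvmf/setup.sh's val_to_ip: split a 32-bit integer into four
# octets, most significant first, and print them dotted-quad style.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # initiator side of the pair -> 10.0.0.1
val_to_ip 167772162   # target side, inside the netns -> 10.0.0.2
```

Since each initiator/target pair consumes two consecutive pool values (`ips=("$ip" $((++ip)))`), a second pair would get 10.0.0.3/10.0.0.4, and so on.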
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:09.011 10.0.0.2 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:09.011 16:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:09.011 16:44:15 
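
The `ipts` call above expands to an `iptables` invocation that appends `-m comment --comment 'SPDK_NVMF:<original args>'`, tagging the rule so teardown can later find and delete exactly the rules the test added. A sketch of that wrapper as inferred from the expanded command in the log; `echo` stands in for the real `iptables` so it runs unprivileged:

```shell
#!/usr/bin/env bash
# Sketch of nvmf/common.sh's ipts helper: pass the rule through unchanged,
# but tag it with an "SPDK_NVMF:" comment built from the same arguments
# ("$*" joins them with single spaces). echo replaces iptables here only
# so the sketch works without root.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# The rule from the log: open the NVMe/TCP listen port on the initiator NIC.
ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Cleanup can then target only tagged rules, e.g. by grepping `iptables-save` for `SPDK_NVMF:` and replaying each match with `-D` instead of `-I`.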
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:09.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:09.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.583 ms 00:20:09.011 00:20:09.011 --- 10.0.0.1 ping statistics --- 00:20:09.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.011 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:09.011 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk 
cat /sys/class/net/cvl_0_1/ifalias' 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:09.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:09.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms
00:20:09.012 
00:20:09.012 --- 10.0.0.2 ping statistics ---
00:20:09.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:09.012 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair++ ))
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # return 1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@160 -- # return 0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # return 1
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@160 -- # return 0
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2
00:20:09.012 '
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=3115754
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 3115754
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3115754 ']'
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:09.012 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:09.013 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:09.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:09.013 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:09.013 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:09.013 [2024-11-05 16:44:15.441614] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization...
00:20:09.013 [2024-11-05 16:44:15.441668] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:09.013 [2024-11-05 16:44:15.519364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:09.013 [2024-11-05 16:44:15.553156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:09.013 [2024-11-05 16:44:15.553190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:09.013 [2024-11-05 16:44:15.553197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:09.013 [2024-11-05 16:44:15.553204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:09.013 [2024-11-05 16:44:15.553210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:09.013 [2024-11-05 16:44:15.553795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:09.275 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:20:09.275 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0
00:20:09.275 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:20:09.275 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:09.275 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:20:09.275 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:09.275 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:20:09.536 [2024-11-05 16:44:16.426103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:09.537 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:20:09.537 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:20:09.537 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:20:09.798 Malloc1
00:20:09.798 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:20:09.798 Malloc2
00:20:09.798 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:20:10.059 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:20:10.320 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:10.320 [2024-11-05 16:44:17.363590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:10.580 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:20:10.580 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cbe5872-320a-496c-b44a-4d41bd950f47 -a 10.0.0.2 -s 4420 -i 4
00:20:10.580 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:20:10.580 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0
00:20:10.580 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:20:10.580 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]]
00:20:10.580 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:13.122 [ 0]:0x1
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa5cda2c82c848ec83812ef58b4c1dc0
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa5cda2c82c848ec83812ef58b4c1dc0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:13.122 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:13.122 [ 0]:0x1
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa5cda2c82c848ec83812ef58b4c1dc0
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa5cda2c82c848ec83812ef58b4c1dc0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:13.122 [ 1]:0x2
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=853636785287432a97c91c768f57cbda
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 853636785287432a97c91c768f57cbda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:13.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:13.122 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:13.384 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:20:13.644 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:20:13.644 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cbe5872-320a-496c-b44a-4d41bd950f47 -a 10.0.0.2 -s 4420 -i 4
00:20:13.644 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:20:13.644 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0
00:20:13.644 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:20:13.644 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]]
00:20:13.644 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1
00:20:13.644 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:16.187 [ 0]:0x2
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=853636785287432a97c91c768f57cbda
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 853636785287432a97c91c768f57cbda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:16.187 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:16.187 [ 0]:0x1
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa5cda2c82c848ec83812ef58b4c1dc0
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa5cda2c82c848ec83812ef58b4c1dc0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:16.187 [ 1]:0x2
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=853636785287432a97c91c768f57cbda
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 853636785287432a97c91c768f57cbda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:16.187 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:16.448 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:20:16.449 [ 0]:0x2
00:20:16.449 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:20:16.449 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:16.449 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=853636785287432a97c91c768f57cbda
00:20:16.449 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 853636785287432a97c91c768f57cbda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:20:16.449 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:20:16.449 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:16.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:16.449 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:20:16.709 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:20:16.709 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cbe5872-320a-496c-b44a-4d41bd950f47 -a 10.0.0.2 -s 4420 -i 4
00:20:16.969 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:20:16.969 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0
00:20:16.969 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:20:16.969 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]]
00:20:16.969 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2
00:20:16.969 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2
00:20:18.881 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:20:18.881 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:20:18.881 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:20:18.881 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2
00:20:18.881 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:20:18.881 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0
00:20:18.881 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:20:18.881 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:20:19.142 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:20:19.142 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:20:19.142 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:20:19.142 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:20:19.142 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:20:19.142 [ 0]:0x1
00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa5cda2c82c848ec83812ef58b4c1dc0
00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@45 -- # [[ aa5cda2c82c848ec83812ef58b4c1dc0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:19.142 [ 1]:0x2 00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=853636785287432a97c91c768f57cbda 00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 853636785287432a97c91c768f57cbda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:19.142 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.402 16:44:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:19.402 [ 0]:0x2 
00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:19.402 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=853636785287432a97c91c768f57cbda 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 853636785287432a97c91c768f57cbda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:19.664 [2024-11-05 16:44:26.646776] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:19.664 request: 00:20:19.664 { 00:20:19.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.664 "nsid": 2, 00:20:19.664 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.664 "method": "nvmf_ns_remove_host", 00:20:19.664 "req_id": 1 00:20:19.664 } 00:20:19.664 Got JSON-RPC error response 00:20:19.664 response: 00:20:19.664 { 00:20:19.664 "code": -32602, 00:20:19.664 "message": "Invalid parameters" 00:20:19.664 } 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 
00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:19.664 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:20:19.924 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:19.924 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:19.924 [ 0]:0x2 00:20:19.924 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:19.924 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:19.924 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=853636785287432a97c91c768f57cbda 00:20:19.924 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 853636785287432a97c91c768f57cbda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:19.924 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:20:19.924 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:19.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3118202 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3118202 /var/tmp/host.sock 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@833 -- # '[' -z 3118202 ']' 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:19.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:19.925 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 [2024-11-05 16:44:26.894571] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:20:19.925 [2024-11-05 16:44:26.894621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3118202 ] 00:20:19.925 [2024-11-05 16:44:26.981688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.184 [2024-11-05 16:44:27.017452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.755 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.755 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:20:20.755 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:21.016 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:21.016 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7648e91e-3f71-4ea7-8b47-6ab7a570f1f8 00:20:21.016 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:20:21.016 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7648E91E3F714EA78B476AB7A570F1F8 -i 00:20:21.277 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 18add61d-552d-4b70-9a4a-79929fef681a 00:20:21.277 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:20:21.277 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 18ADD61D552D4B709A4A79929FEF681A -i 00:20:21.537 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:21.537 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:20:21.798 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:21.798 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:22.058 nvme0n1 00:20:22.058 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:22.058 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:22.318 nvme1n2 00:20:22.318 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:20:22.318 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:22.318 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:20:22.318 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:20:22.318 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:20:22.318 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:20:22.318 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:20:22.318 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:20:22.319 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:20:22.592 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7648e91e-3f71-4ea7-8b47-6ab7a570f1f8 == \7\6\4\8\e\9\1\e\-\3\f\7\1\-\4\e\a\7\-\8\b\4\7\-\6\a\b\7\a\5\7\0\f\1\f\8 ]] 00:20:22.592 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:20:22.592 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:20:22.592 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:20:22.852 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 18add61d-552d-4b70-9a4a-79929fef681a == \1\8\a\d\d\6\1\d\-\5\5\2\d\-\4\b\7\0\-\9\a\4\a\-\7\9\9\2\9\f\e\f\6\8\1\a ]] 00:20:22.852 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:22.852 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7648e91e-3f71-4ea7-8b47-6ab7a570f1f8 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7648E91E3F714EA78B476AB7A570F1F8 00:20:23.113 16:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7648E91E3F714EA78B476AB7A570F1F8 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:23.113 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7648E91E3F714EA78B476AB7A570F1F8 00:20:23.373 [2024-11-05 16:44:30.200673] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: invalid 00:20:23.373 [2024-11-05 16:44:30.200709] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:20:23.373 [2024-11-05 16:44:30.200718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.373 request: 00:20:23.373 { 00:20:23.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.373 "namespace": { 00:20:23.373 "bdev_name": "invalid", 00:20:23.373 "nsid": 1, 00:20:23.373 "nguid": "7648E91E3F714EA78B476AB7A570F1F8", 00:20:23.373 "no_auto_visible": false 00:20:23.373 }, 00:20:23.373 "method": "nvmf_subsystem_add_ns", 00:20:23.373 "req_id": 1 00:20:23.373 } 00:20:23.373 Got JSON-RPC error response 00:20:23.373 response: 00:20:23.373 { 00:20:23.373 "code": -32602, 00:20:23.373 "message": "Invalid parameters" 00:20:23.373 } 00:20:23.373 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:23.373 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:23.373 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:23.373 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:23.373 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7648e91e-3f71-4ea7-8b47-6ab7a570f1f8 00:20:23.373 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:20:23.373 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7648E91E3F714EA78B476AB7A570F1F8 -i 00:20:23.373 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:20:25.916 16:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3118202 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3118202 ']' 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3118202 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3118202 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3118202' 00:20:25.916 killing process with pid 3118202 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3118202 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3118202 00:20:25.916 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:26.176 rmmod nvme_tcp 00:20:26.176 rmmod nvme_fabrics 00:20:26.176 rmmod nvme_keyring 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 3115754 ']' 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 3115754 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3115754 ']' 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3115754 00:20:26.176 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 
00:20:26.177 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:26.177 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3115754 00:20:26.177 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:26.177 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:26.177 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3115754' 00:20:26.177 killing process with pid 3115754 00:20:26.177 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3115754 00:20:26.177 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3115754 00:20:26.437 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:26.437 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:20:26.437 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@254 -- # local dev 00:20:26.437 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:26.437 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:26.437 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:26.437 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 
00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # return 0 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:20:28.366 16:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@274 -- # iptr 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-save 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-restore 00:20:28.366 00:20:28.366 real 0m27.742s 00:20:28.366 user 0m31.246s 00:20:28.366 sys 0m8.034s 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:28.366 ************************************ 00:20:28.366 END TEST nvmf_ns_masking 00:20:28.366 ************************************ 00:20:28.366 16:44:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.627 ************************************ 00:20:28.627 START TEST nvmf_nvme_cli 00:20:28.627 ************************************ 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:20:28.627 * Looking for test storage... 00:20:28.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.627 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
scripts/common.sh@344 -- # case "$op" in 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:28.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.628 --rc genhtml_branch_coverage=1 00:20:28.628 --rc genhtml_function_coverage=1 00:20:28.628 --rc genhtml_legend=1 00:20:28.628 --rc geninfo_all_blocks=1 00:20:28.628 --rc geninfo_unexecuted_blocks=1 00:20:28.628 00:20:28.628 ' 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:28.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.628 --rc genhtml_branch_coverage=1 00:20:28.628 --rc genhtml_function_coverage=1 00:20:28.628 --rc genhtml_legend=1 00:20:28.628 --rc geninfo_all_blocks=1 00:20:28.628 --rc geninfo_unexecuted_blocks=1 00:20:28.628 00:20:28.628 ' 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:28.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.628 --rc genhtml_branch_coverage=1 00:20:28.628 --rc genhtml_function_coverage=1 00:20:28.628 --rc genhtml_legend=1 00:20:28.628 --rc geninfo_all_blocks=1 00:20:28.628 --rc geninfo_unexecuted_blocks=1 00:20:28.628 00:20:28.628 ' 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:28.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.628 --rc genhtml_branch_coverage=1 00:20:28.628 --rc genhtml_function_coverage=1 00:20:28.628 --rc genhtml_legend=1 00:20:28.628 --rc geninfo_all_blocks=1 00:20:28.628 --rc geninfo_unexecuted_blocks=1 00:20:28.628 00:20:28.628 ' 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.628 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # : 0 
00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:28.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # local -g 
is_hw=no 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:20:28.889 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@136 -- # e810=() 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # mlx=() 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:37.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:37.122 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:37.122 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:37.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:37.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:20:37.123 16:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@247 -- # create_target_ns 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:37.123 16:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 
netns nvmf_ns_spdk 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:37.123 10.0.0.1 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:37.123 10.0.0.2 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:37.123 
16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:37.123 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:37.124 16:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:37.124 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:37.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.706 ms 00:20:37.124 00:20:37.124 --- 10.0.0.1 ping statistics --- 00:20:37.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.124 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:37.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:37.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:20:37.124 00:20:37.124 --- 10.0.0.2 ping statistics --- 00:20:37.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.124 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:37.124 16:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # return 1 00:20:37.124 16:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev= 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@160 -- # return 0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target0 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:37.124 16:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.124 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target1 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # return 1 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev= 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@160 -- # return 0 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
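The `set_ip` calls earlier in this trace pass IPs as 32-bit integers (e.g. `167772161`) and the `val_to_ip` helper prints them in dotted-quad form (`10.0.0.1`). A minimal standalone sketch of that conversion follows; the octet-shift implementation is inferred from the log's inputs and outputs, not copied from SPDK's `nvmf/setup.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip helper seen above: split a 32-bit integer
# into four octets. Implementation is an assumption reconstructed from
# the observed mapping 167772161 -> 10.0.0.1 in this log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
    $((val >> 8 & 0xff)) $((val & 0xff))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Consecutive addresses in the test's `ip_pool` are just consecutive integers, which is why the loop above advances the pool with `ip_pool += 2` per initiator/target pair.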
00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:20:37.125 ' 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=3123674 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 3123674 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3123674 ']' 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.125 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 [2024-11-05 16:44:43.213997] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:20:37.125 [2024-11-05 16:44:43.214063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.125 [2024-11-05 16:44:43.299297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.125 [2024-11-05 16:44:43.341847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.125 [2024-11-05 16:44:43.341888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.125 [2024-11-05 16:44:43.341896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.125 [2024-11-05 16:44:43.341902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.125 [2024-11-05 16:44:43.341908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
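The `waitforlisten` step above blocks until the freshly launched `nvmf_tgt` accepts RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that poll-until-socket-exists pattern is below; the function name, retry budget, and interval are illustrative assumptions, not SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of waiting for a daemon's UNIX-domain RPC socket,
# in the spirit of the waitforlisten helper above. Not SPDK code: the
# name, retry count, and sleep interval are assumptions for illustration.
waitforsocket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    # -S: true once the path exists and is a socket
    [[ -S $sock ]] && return 0
    sleep 0.1
  done
  return 1
}
```

In the log, the RPC client then retries `spdk.sock` up to `max_retries=100` times before giving up, which matches this shape of bounded polling.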
00:20:37.125 [2024-11-05 16:44:43.343508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.125 [2024-11-05 16:44:43.343624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.125 [2024-11-05 16:44:43.343816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.125 [2024-11-05 16:44:43.343816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 [2024-11-05 16:44:44.070435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 Malloc0 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 Malloc1 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.125 [2024-11-05 16:44:44.168572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.125 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:20:37.387 00:20:37.387 Discovery Log Number of Records 2, Generation counter 2 00:20:37.387 =====Discovery Log Entry 0====== 00:20:37.387 trtype: tcp 00:20:37.387 adrfam: ipv4 00:20:37.387 subtype: current discovery subsystem 00:20:37.387 treq: not required 00:20:37.387 portid: 0 00:20:37.387 trsvcid: 4420 
00:20:37.387 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:37.387 traddr: 10.0.0.2 00:20:37.387 eflags: explicit discovery connections, duplicate discovery information 00:20:37.387 sectype: none 00:20:37.387 =====Discovery Log Entry 1====== 00:20:37.387 trtype: tcp 00:20:37.387 adrfam: ipv4 00:20:37.387 subtype: nvme subsystem 00:20:37.387 treq: not required 00:20:37.387 portid: 0 00:20:37.387 trsvcid: 4420 00:20:37.387 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:37.387 traddr: 10.0.0.2 00:20:37.387 eflags: none 00:20:37.387 sectype: none 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:20:37.387 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:39.300 16:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:39.300 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:20:39.300 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:39.300 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:20:39.300 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:20:39.300 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:20:41.215 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:20:41.215 
16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:20:41.215 /dev/nvme0n2 ]] 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ 
--------------------- == /dev/nvme* ]] 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:41.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:20:41.215 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:41.216 rmmod nvme_tcp 00:20:41.216 rmmod nvme_fabrics 00:20:41.216 rmmod nvme_keyring 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # return 0 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 3123674 ']' 
00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 3123674 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3123674 ']' 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3123674 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:41.216 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3123674 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3123674' 00:20:41.477 killing process with pid 3123674 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3123674 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3123674 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@254 -- # local dev 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:41.477 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@121 -- # return 0 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 
00:20:44.026 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@274 -- # iptr 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # iptables-save 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # iptables-restore 00:20:44.027 00:20:44.027 real 0m15.073s 00:20:44.027 user 0m22.722s 00:20:44.027 sys 0m6.186s 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:44.027 ************************************ 00:20:44.027 END TEST nvmf_nvme_cli 00:20:44.027 ************************************ 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.027 ************************************ 00:20:44.027 START TEST nvmf_vfio_user 00:20:44.027 ************************************ 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:44.027 * Looking for test storage... 00:20:44.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.027 16:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:44.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.027 --rc genhtml_branch_coverage=1 00:20:44.027 --rc genhtml_function_coverage=1 00:20:44.027 --rc genhtml_legend=1 00:20:44.027 --rc geninfo_all_blocks=1 00:20:44.027 --rc geninfo_unexecuted_blocks=1 00:20:44.027 00:20:44.027 ' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:44.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.027 --rc genhtml_branch_coverage=1 00:20:44.027 --rc genhtml_function_coverage=1 00:20:44.027 --rc genhtml_legend=1 00:20:44.027 --rc geninfo_all_blocks=1 00:20:44.027 --rc geninfo_unexecuted_blocks=1 00:20:44.027 00:20:44.027 ' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:44.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.027 --rc genhtml_branch_coverage=1 00:20:44.027 --rc genhtml_function_coverage=1 00:20:44.027 --rc genhtml_legend=1 00:20:44.027 --rc geninfo_all_blocks=1 00:20:44.027 --rc geninfo_unexecuted_blocks=1 00:20:44.027 00:20:44.027 ' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:44.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.027 --rc genhtml_branch_coverage=1 00:20:44.027 --rc genhtml_function_coverage=1 00:20:44.027 --rc genhtml_legend=1 00:20:44.027 --rc geninfo_all_blocks=1 00:20:44.027 --rc 
geninfo_unexecuted_blocks=1 00:20:44.027 00:20:44.027 ' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.027 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # : 0 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:44.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3125326 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3125326' 00:20:44.028 Process pid: 3125326 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3125326 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3125326 ']' 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:44.028 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:44.028 [2024-11-05 16:44:50.941770] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:20:44.028 [2024-11-05 16:44:50.941846] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.028 [2024-11-05 16:44:51.020948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.028 [2024-11-05 16:44:51.064128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:44.028 [2024-11-05 16:44:51.064168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.028 [2024-11-05 16:44:51.064177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.028 [2024-11-05 16:44:51.064184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.028 [2024-11-05 16:44:51.064190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.028 [2024-11-05 16:44:51.065783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.028 [2024-11-05 16:44:51.066012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.028 [2024-11-05 16:44:51.065867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.028 [2024-11-05 16:44:51.066012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.971 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:44.971 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:20:44.971 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:45.914 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:20:45.914 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:45.914 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:45.914 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:45.914 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user1/1 00:20:45.914 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:46.175 Malloc1 00:20:46.175 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:46.436 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:46.698 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:46.698 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:46.698 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:46.698 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:46.960 Malloc2 00:20:46.960 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:47.221 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:47.221 16:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:47.483 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:20:47.483 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:20:47.483 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:47.483 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:47.483 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:20:47.483 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:47.483 [2024-11-05 16:44:54.470484] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:20:47.483 [2024-11-05 16:44:54.470530] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126053 ] 00:20:47.483 [2024-11-05 16:44:54.524887] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:20:47.483 [2024-11-05 16:44:54.527158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:47.483 [2024-11-05 16:44:54.527179] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faae3f8e000 00:20:47.483 [2024-11-05 16:44:54.528161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:47.483 [2024-11-05 16:44:54.529159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:47.483 [2024-11-05 16:44:54.530160] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:47.483 [2024-11-05 16:44:54.531171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:47.483 [2024-11-05 16:44:54.532181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:47.483 [2024-11-05 16:44:54.535752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:47.483 [2024-11-05 16:44:54.536198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:47.483 
[2024-11-05 16:44:54.537198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:47.483 [2024-11-05 16:44:54.538212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:47.483 [2024-11-05 16:44:54.538226] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faae3f83000 00:20:47.483 [2024-11-05 16:44:54.539553] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:47.746 [2024-11-05 16:44:54.556467] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:20:47.746 [2024-11-05 16:44:54.556494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:20:47.746 [2024-11-05 16:44:54.561340] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:47.746 [2024-11-05 16:44:54.561385] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:47.746 [2024-11-05 16:44:54.561471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:20:47.746 [2024-11-05 16:44:54.561490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:20:47.746 [2024-11-05 16:44:54.561496] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:20:47.746 [2024-11-05 16:44:54.562342] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:20:47.746 [2024-11-05 16:44:54.562352] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:20:47.746 [2024-11-05 16:44:54.562359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:20:47.746 [2024-11-05 16:44:54.563348] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:47.746 [2024-11-05 16:44:54.563357] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:20:47.746 [2024-11-05 16:44:54.563364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:47.746 [2024-11-05 16:44:54.564350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:20:47.746 [2024-11-05 16:44:54.564359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:47.746 [2024-11-05 16:44:54.565357] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:20:47.746 [2024-11-05 16:44:54.565366] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:47.746 [2024-11-05 16:44:54.565371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:47.746 [2024-11-05 16:44:54.565378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:47.746 [2024-11-05 16:44:54.565489] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:20:47.746 [2024-11-05 16:44:54.565494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:47.746 [2024-11-05 16:44:54.565499] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:20:47.746 [2024-11-05 16:44:54.566374] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:20:47.746 [2024-11-05 16:44:54.567376] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:20:47.746 [2024-11-05 16:44:54.568383] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:47.746 [2024-11-05 16:44:54.569385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:47.746 [2024-11-05 16:44:54.569448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:47.746 [2024-11-05 16:44:54.570398] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:20:47.746 [2024-11-05 16:44:54.570407] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:47.746 [2024-11-05 16:44:54.570412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:47.746 [2024-11-05 16:44:54.570433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:20:47.746 [2024-11-05 16:44:54.570445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:47.746 [2024-11-05 16:44:54.570460] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:47.746 [2024-11-05 16:44:54.570465] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:47.746 [2024-11-05 16:44:54.570469] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:47.746 [2024-11-05 16:44:54.570483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:47.746 [2024-11-05 16:44:54.570518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:47.746 [2024-11-05 16:44:54.570528] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:20:47.746 [2024-11-05 16:44:54.570533] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:20:47.746 [2024-11-05 16:44:54.570537] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:20:47.746 [2024-11-05 16:44:54.570542] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:47.746 [2024-11-05 16:44:54.570547] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:20:47.746 [2024-11-05 16:44:54.570553] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:20:47.746 [2024-11-05 16:44:54.570558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:20:47.746 [2024-11-05 16:44:54.570566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:47.746 [2024-11-05 16:44:54.570579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:47.746 [2024-11-05 16:44:54.570589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:47.746 [2024-11-05 16:44:54.570602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.746 [2024-11-05 16:44:54.570611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.746 [2024-11-05 16:44:54.570619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.746 [2024-11-05 16:44:54.570627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.746 [2024-11-05 16:44:54.570632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:47.746 [2024-11-05 16:44:54.570640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:47.746 [2024-11-05 16:44:54.570649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:47.746 [2024-11-05 16:44:54.570656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:47.746 [2024-11-05 16:44:54.570663] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:20:47.746 [2024-11-05 16:44:54.570669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.570698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.570767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:47.747 
[2024-11-05 16:44:54.570784] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:47.747 [2024-11-05 16:44:54.570788] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:47.747 [2024-11-05 16:44:54.570791] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:47.747 [2024-11-05 16:44:54.570798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.570807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.570817] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:20:47.747 [2024-11-05 16:44:54.570825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570842] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:47.747 [2024-11-05 16:44:54.570846] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:47.747 [2024-11-05 16:44:54.570850] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:47.747 [2024-11-05 16:44:54.570856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.570874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.570887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570902] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:47.747 [2024-11-05 16:44:54.570906] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:47.747 [2024-11-05 16:44:54.570909] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:47.747 [2024-11-05 16:44:54.570915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.570927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.570935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570972] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:47.747 [2024-11-05 16:44:54.570977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:20:47.747 [2024-11-05 16:44:54.570983] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:20:47.747 [2024-11-05 16:44:54.571001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.571011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.571023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.571032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.571043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.571056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 
16:44:54.571067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.571077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.571090] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:47.747 [2024-11-05 16:44:54.571095] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:47.747 [2024-11-05 16:44:54.571098] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:47.747 [2024-11-05 16:44:54.571102] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:47.747 [2024-11-05 16:44:54.571105] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:47.747 [2024-11-05 16:44:54.571111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:47.747 [2024-11-05 16:44:54.571119] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:47.747 [2024-11-05 16:44:54.571123] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:47.747 [2024-11-05 16:44:54.571127] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:47.747 [2024-11-05 16:44:54.571133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.571141] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:47.747 [2024-11-05 16:44:54.571145] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:47.747 [2024-11-05 16:44:54.571148] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:47.747 [2024-11-05 16:44:54.571154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.571164] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:47.747 [2024-11-05 16:44:54.571169] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:47.747 [2024-11-05 16:44:54.571172] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:47.747 [2024-11-05 16:44:54.571178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:47.747 [2024-11-05 16:44:54.571185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.571196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.571206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:47.747 [2024-11-05 16:44:54.571214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:47.747 ===================================================== 00:20:47.747 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:47.747 ===================================================== 00:20:47.747 Controller Capabilities/Features 00:20:47.747 
================================ 00:20:47.747 Vendor ID: 4e58 00:20:47.747 Subsystem Vendor ID: 4e58 00:20:47.747 Serial Number: SPDK1 00:20:47.747 Model Number: SPDK bdev Controller 00:20:47.747 Firmware Version: 25.01 00:20:47.747 Recommended Arb Burst: 6 00:20:47.747 IEEE OUI Identifier: 8d 6b 50 00:20:47.747 Multi-path I/O 00:20:47.747 May have multiple subsystem ports: Yes 00:20:47.747 May have multiple controllers: Yes 00:20:47.747 Associated with SR-IOV VF: No 00:20:47.747 Max Data Transfer Size: 131072 00:20:47.747 Max Number of Namespaces: 32 00:20:47.747 Max Number of I/O Queues: 127 00:20:47.747 NVMe Specification Version (VS): 1.3 00:20:47.747 NVMe Specification Version (Identify): 1.3 00:20:47.747 Maximum Queue Entries: 256 00:20:47.747 Contiguous Queues Required: Yes 00:20:47.747 Arbitration Mechanisms Supported 00:20:47.747 Weighted Round Robin: Not Supported 00:20:47.747 Vendor Specific: Not Supported 00:20:47.747 Reset Timeout: 15000 ms 00:20:47.747 Doorbell Stride: 4 bytes 00:20:47.747 NVM Subsystem Reset: Not Supported 00:20:47.747 Command Sets Supported 00:20:47.747 NVM Command Set: Supported 00:20:47.747 Boot Partition: Not Supported 00:20:47.747 Memory Page Size Minimum: 4096 bytes 00:20:47.747 Memory Page Size Maximum: 4096 bytes 00:20:47.747 Persistent Memory Region: Not Supported 00:20:47.747 Optional Asynchronous Events Supported 00:20:47.747 Namespace Attribute Notices: Supported 00:20:47.747 Firmware Activation Notices: Not Supported 00:20:47.747 ANA Change Notices: Not Supported 00:20:47.747 PLE Aggregate Log Change Notices: Not Supported 00:20:47.747 LBA Status Info Alert Notices: Not Supported 00:20:47.747 EGE Aggregate Log Change Notices: Not Supported 00:20:47.747 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.748 Zone Descriptor Change Notices: Not Supported 00:20:47.748 Discovery Log Change Notices: Not Supported 00:20:47.748 Controller Attributes 00:20:47.748 128-bit Host Identifier: Supported 00:20:47.748 
Non-Operational Permissive Mode: Not Supported 00:20:47.748 NVM Sets: Not Supported 00:20:47.748 Read Recovery Levels: Not Supported 00:20:47.748 Endurance Groups: Not Supported 00:20:47.748 Predictable Latency Mode: Not Supported 00:20:47.748 Traffic Based Keep Alive: Not Supported 00:20:47.748 Namespace Granularity: Not Supported 00:20:47.748 SQ Associations: Not Supported 00:20:47.748 UUID List: Not Supported 00:20:47.748 Multi-Domain Subsystem: Not Supported 00:20:47.748 Fixed Capacity Management: Not Supported 00:20:47.748 Variable Capacity Management: Not Supported 00:20:47.748 Delete Endurance Group: Not Supported 00:20:47.748 Delete NVM Set: Not Supported 00:20:47.748 Extended LBA Formats Supported: Not Supported 00:20:47.748 Flexible Data Placement Supported: Not Supported 00:20:47.748 00:20:47.748 Controller Memory Buffer Support 00:20:47.748 ================================ 00:20:47.748 Supported: No 00:20:47.748 00:20:47.748 Persistent Memory Region Support 00:20:47.748 ================================ 00:20:47.748 Supported: No 00:20:47.748 00:20:47.748 Admin Command Set Attributes 00:20:47.748 ============================ 00:20:47.748 Security Send/Receive: Not Supported 00:20:47.748 Format NVM: Not Supported 00:20:47.748 Firmware Activate/Download: Not Supported 00:20:47.748 Namespace Management: Not Supported 00:20:47.748 Device Self-Test: Not Supported 00:20:47.748 Directives: Not Supported 00:20:47.748 NVMe-MI: Not Supported 00:20:47.748 Virtualization Management: Not Supported 00:20:47.748 Doorbell Buffer Config: Not Supported 00:20:47.748 Get LBA Status Capability: Not Supported 00:20:47.748 Command & Feature Lockdown Capability: Not Supported 00:20:47.748 Abort Command Limit: 4 00:20:47.748 Async Event Request Limit: 4 00:20:47.748 Number of Firmware Slots: N/A 00:20:47.748 Firmware Slot 1 Read-Only: N/A 00:20:47.748 Firmware Activation Without Reset: N/A 00:20:47.748 Multiple Update Detection Support: N/A 00:20:47.748 Firmware Update 
Granularity: No Information Provided 00:20:47.748 Per-Namespace SMART Log: No 00:20:47.748 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.748 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:20:47.748 Command Effects Log Page: Supported 00:20:47.748 Get Log Page Extended Data: Supported 00:20:47.748 Telemetry Log Pages: Not Supported 00:20:47.748 Persistent Event Log Pages: Not Supported 00:20:47.748 Supported Log Pages Log Page: May Support 00:20:47.748 Commands Supported & Effects Log Page: Not Supported 00:20:47.748 Feature Identifiers & Effects Log Page: May Support 00:20:47.748 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.748 Data Area 4 for Telemetry Log: Not Supported 00:20:47.748 Error Log Page Entries Supported: 128 00:20:47.748 Keep Alive: Supported 00:20:47.748 Keep Alive Granularity: 10000 ms 00:20:47.748 00:20:47.748 NVM Command Set Attributes 00:20:47.748 ========================== 00:20:47.748 Submission Queue Entry Size 00:20:47.748 Max: 64 00:20:47.748 Min: 64 00:20:47.748 Completion Queue Entry Size 00:20:47.748 Max: 16 00:20:47.748 Min: 16 00:20:47.748 Number of Namespaces: 32 00:20:47.748 Compare Command: Supported 00:20:47.748 Write Uncorrectable Command: Not Supported 00:20:47.748 Dataset Management Command: Supported 00:20:47.748 Write Zeroes Command: Supported 00:20:47.748 Set Features Save Field: Not Supported 00:20:47.748 Reservations: Not Supported 00:20:47.748 Timestamp: Not Supported 00:20:47.748 Copy: Supported 00:20:47.748 Volatile Write Cache: Present 00:20:47.748 Atomic Write Unit (Normal): 1 00:20:47.748 Atomic Write Unit (PFail): 1 00:20:47.748 Atomic Compare & Write Unit: 1 00:20:47.748 Fused Compare & Write: Supported 00:20:47.748 Scatter-Gather List 00:20:47.748 SGL Command Set: Supported (Dword aligned) 00:20:47.748 SGL Keyed: Not Supported 00:20:47.748 SGL Bit Bucket Descriptor: Not Supported 00:20:47.748 SGL Metadata Pointer: Not Supported 00:20:47.748 Oversized SGL: Not Supported 00:20:47.748 SGL 
Metadata Address: Not Supported 00:20:47.748 SGL Offset: Not Supported 00:20:47.748 Transport SGL Data Block: Not Supported 00:20:47.748 Replay Protected Memory Block: Not Supported 00:20:47.748 00:20:47.748 Firmware Slot Information 00:20:47.748 ========================= 00:20:47.748 Active slot: 1 00:20:47.748 Slot 1 Firmware Revision: 25.01 00:20:47.748 00:20:47.748 00:20:47.748 Commands Supported and Effects 00:20:47.748 ============================== 00:20:47.748 Admin Commands 00:20:47.748 -------------- 00:20:47.748 Get Log Page (02h): Supported 00:20:47.748 Identify (06h): Supported 00:20:47.748 Abort (08h): Supported 00:20:47.748 Set Features (09h): Supported 00:20:47.748 Get Features (0Ah): Supported 00:20:47.748 Asynchronous Event Request (0Ch): Supported 00:20:47.748 Keep Alive (18h): Supported 00:20:47.748 I/O Commands 00:20:47.748 ------------ 00:20:47.748 Flush (00h): Supported LBA-Change 00:20:47.748 Write (01h): Supported LBA-Change 00:20:47.748 Read (02h): Supported 00:20:47.748 Compare (05h): Supported 00:20:47.748 Write Zeroes (08h): Supported LBA-Change 00:20:47.748 Dataset Management (09h): Supported LBA-Change 00:20:47.748 Copy (19h): Supported LBA-Change 00:20:47.748 00:20:47.748 Error Log 00:20:47.748 ========= 00:20:47.748 00:20:47.748 Arbitration 00:20:47.748 =========== 00:20:47.748 Arbitration Burst: 1 00:20:47.748 00:20:47.748 Power Management 00:20:47.748 ================ 00:20:47.748 Number of Power States: 1 00:20:47.748 Current Power State: Power State #0 00:20:47.748 Power State #0: 00:20:47.748 Max Power: 0.00 W 00:20:47.748 Non-Operational State: Operational 00:20:47.748 Entry Latency: Not Reported 00:20:47.748 Exit Latency: Not Reported 00:20:47.748 Relative Read Throughput: 0 00:20:47.748 Relative Read Latency: 0 00:20:47.748 Relative Write Throughput: 0 00:20:47.748 Relative Write Latency: 0 00:20:47.748 Idle Power: Not Reported 00:20:47.748 Active Power: Not Reported 00:20:47.748 Non-Operational Permissive Mode: Not 
Supported 00:20:47.748 00:20:47.748 Health Information 00:20:47.748 ================== 00:20:47.748 Critical Warnings: 00:20:47.748 Available Spare Space: OK 00:20:47.748 Temperature: OK 00:20:47.748 Device Reliability: OK 00:20:47.748 Read Only: No 00:20:47.748 Volatile Memory Backup: OK 00:20:47.748 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:47.748 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:47.748 Available Spare: 0% 00:20:47.748 
[2024-11-05 16:44:54.571317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:47.748 [2024-11-05 16:44:54.571326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:47.748 [2024-11-05 16:44:54.571355] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:20:47.748 [2024-11-05 16:44:54.571365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.748 [2024-11-05 16:44:54.571372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.748 [2024-11-05 16:44:54.571378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.748 [2024-11-05 16:44:54.571384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.748 [2024-11-05 16:44:54.574756] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:47.748 [2024-11-05 16:44:54.574768] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:20:47.748 
[2024-11-05 16:44:54.575413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:47.748 [2024-11-05 16:44:54.575455] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:20:47.748 [2024-11-05 16:44:54.575461] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:20:47.748 [2024-11-05 16:44:54.576429] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:20:47.748 [2024-11-05 16:44:54.576440] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:20:47.748 [2024-11-05 16:44:54.576502] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:20:47.748 [2024-11-05 16:44:54.578459] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:47.748 
Available Spare Threshold: 0% 00:20:47.748 Life Percentage Used: 0% 00:20:47.748 Data Units Read: 0 00:20:47.748 Data Units Written: 0 00:20:47.748 Host Read Commands: 0 00:20:47.748 Host Write Commands: 0 00:20:47.748 Controller Busy Time: 0 minutes 00:20:47.748 Power Cycles: 0 00:20:47.748 Power On Hours: 0 hours 00:20:47.748 Unsafe Shutdowns: 0 00:20:47.749 Unrecoverable Media Errors: 0 00:20:47.749 Lifetime Error Log Entries: 0 00:20:47.749 Warning Temperature Time: 0 minutes 00:20:47.749 Critical Temperature Time: 0 minutes 00:20:47.749 00:20:47.749 Number of Queues 00:20:47.749 ================ 00:20:47.749 Number of I/O Submission Queues: 127 00:20:47.749 Number of I/O Completion Queues: 127 00:20:47.749 00:20:47.749 Active Namespaces 00:20:47.749 ================= 00:20:47.749 Namespace ID:1 00:20:47.749 Error Recovery Timeout: Unlimited 
00:20:47.749 Command Set Identifier: NVM (00h) 00:20:47.749 Deallocate: Supported 00:20:47.749 Deallocated/Unwritten Error: Not Supported 00:20:47.749 Deallocated Read Value: Unknown 00:20:47.749 Deallocate in Write Zeroes: Not Supported 00:20:47.749 Deallocated Guard Field: 0xFFFF 00:20:47.749 Flush: Supported 00:20:47.749 Reservation: Supported 00:20:47.749 Namespace Sharing Capabilities: Multiple Controllers 00:20:47.749 Size (in LBAs): 131072 (0GiB) 00:20:47.749 Capacity (in LBAs): 131072 (0GiB) 00:20:47.749 Utilization (in LBAs): 131072 (0GiB) 00:20:47.749 NGUID: 30E4AD52073F498FB3A7336DABC4C8BD 00:20:47.749 UUID: 30e4ad52-073f-498f-b3a7-336dabc4c8bd 00:20:47.749 Thin Provisioning: Not Supported 00:20:47.749 Per-NS Atomic Units: Yes 00:20:47.749 Atomic Boundary Size (Normal): 0 00:20:47.749 Atomic Boundary Size (PFail): 0 00:20:47.749 Atomic Boundary Offset: 0 00:20:47.749 Maximum Single Source Range Length: 65535 00:20:47.749 Maximum Copy Length: 65535 00:20:47.749 Maximum Source Range Count: 1 00:20:47.749 NGUID/EUI64 Never Reused: No 00:20:47.749 Namespace Write Protected: No 00:20:47.749 Number of LBA Formats: 1 00:20:47.749 Current LBA Format: LBA Format #00 00:20:47.749 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.749 00:20:47.749 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:47.749 [2024-11-05 16:44:54.783454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:53.040 Initializing NVMe Controllers 00:20:53.040 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:53.040 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 
00:20:53.040 Initialization complete. Launching workers. 00:20:53.040 ======================================================== 00:20:53.040 Latency(us) 00:20:53.040 Device Information : IOPS MiB/s Average min max 00:20:53.040 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39969.71 156.13 3202.30 853.16 6913.48 00:20:53.040 ======================================================== 00:20:53.040 Total : 39969.71 156.13 3202.30 853.16 6913.48 00:20:53.040 00:20:53.040 [2024-11-05 16:44:59.802752] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:53.040 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:53.040 [2024-11-05 16:44:59.993645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:58.331 Initializing NVMe Controllers 00:20:58.331 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:58.331 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:58.331 Initialization complete. Launching workers. 
00:20:58.331 ======================================================== 00:20:58.331 Latency(us) 00:20:58.331 Device Information : IOPS MiB/s Average min max 00:20:58.331 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16056.60 62.72 7977.28 6656.63 8308.64 00:20:58.331 ======================================================== 00:20:58.331 Total : 16056.60 62.72 7977.28 6656.63 8308.64 00:20:58.331 00:20:58.331 [2024-11-05 16:45:05.033268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:58.331 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:58.331 [2024-11-05 16:45:05.234158] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:03.646 [2024-11-05 16:45:10.317006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:03.646 Initializing NVMe Controllers 00:21:03.646 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:03.646 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:03.646 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:21:03.646 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:21:03.646 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:21:03.646 Initialization complete. Launching workers. 
00:21:03.646 Starting thread on core 2 00:21:03.646 Starting thread on core 3 00:21:03.646 Starting thread on core 1 00:21:03.646 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:21:03.646 [2024-11-05 16:45:10.597503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:06.968 [2024-11-05 16:45:13.654536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:06.968 Initializing NVMe Controllers 00:21:06.968 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:06.968 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:06.968 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:21:06.968 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:21:06.968 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:21:06.968 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:21:06.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:21:06.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:21:06.968 Initialization complete. Launching workers. 
00:21:06.968 Starting thread on core 1 with urgent priority queue 00:21:06.968 Starting thread on core 2 with urgent priority queue 00:21:06.968 Starting thread on core 3 with urgent priority queue 00:21:06.968 Starting thread on core 0 with urgent priority queue 00:21:06.968 SPDK bdev Controller (SPDK1 ) core 0: 8551.33 IO/s 11.69 secs/100000 ios 00:21:06.968 SPDK bdev Controller (SPDK1 ) core 1: 8138.33 IO/s 12.29 secs/100000 ios 00:21:06.968 SPDK bdev Controller (SPDK1 ) core 2: 8121.67 IO/s 12.31 secs/100000 ios 00:21:06.968 SPDK bdev Controller (SPDK1 ) core 3: 9265.33 IO/s 10.79 secs/100000 ios 00:21:06.968 ======================================================== 00:21:06.968 00:21:06.968 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:21:06.968 [2024-11-05 16:45:13.951186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:06.968 Initializing NVMe Controllers 00:21:06.968 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:06.968 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:06.968 Namespace ID: 1 size: 0GB 00:21:06.968 Initialization complete. 00:21:06.968 INFO: using host memory buffer for IO 00:21:06.968 Hello world! 
00:21:06.968 [2024-11-05 16:45:13.987404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:07.230 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:21:07.230 [2024-11-05 16:45:14.274168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:08.616 Initializing NVMe Controllers 00:21:08.616 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:08.616 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:08.617 Initialization complete. Launching workers. 00:21:08.617 submit (in ns) avg, min, max = 8590.8, 3898.3, 4996758.3 00:21:08.617 complete (in ns) avg, min, max = 18219.8, 2395.8, 4000452.5 00:21:08.617 00:21:08.617 Submit histogram 00:21:08.617 ================ 00:21:08.617 Range in us Cumulative Count 00:21:08.617 3.893 - 3.920: 1.0095% ( 189) 00:21:08.617 3.920 - 3.947: 6.8529% ( 1094) 00:21:08.617 3.947 - 3.973: 16.9213% ( 1885) 00:21:08.617 3.973 - 4.000: 28.1914% ( 2110) 00:21:08.617 4.000 - 4.027: 38.8634% ( 1998) 00:21:08.617 4.027 - 4.053: 49.8237% ( 2052) 00:21:08.617 4.053 - 4.080: 67.0548% ( 3226) 00:21:08.617 4.080 - 4.107: 82.2455% ( 2844) 00:21:08.617 4.107 - 4.133: 92.4260% ( 1906) 00:21:08.617 4.133 - 4.160: 97.0730% ( 870) 00:21:08.617 4.160 - 4.187: 98.8676% ( 336) 00:21:08.617 4.187 - 4.213: 99.3323% ( 87) 00:21:08.617 4.213 - 4.240: 99.4392% ( 20) 00:21:08.617 4.240 - 4.267: 99.4605% ( 4) 00:21:08.617 4.267 - 4.293: 99.4659% ( 1) 00:21:08.617 4.347 - 4.373: 99.4712% ( 1) 00:21:08.617 4.400 - 4.427: 99.4766% ( 1) 00:21:08.617 4.907 - 4.933: 99.4819% ( 1) 00:21:08.617 4.933 - 4.960: 99.4872% ( 1) 00:21:08.617 4.960 - 4.987: 99.4926% ( 1) 00:21:08.617 5.013 - 5.040: 99.4979% ( 1) 
00:21:08.617 5.040 - 5.067: 99.5033% ( 1) 00:21:08.617 5.227 - 5.253: 99.5086% ( 1) 00:21:08.617 5.627 - 5.653: 99.5139% ( 1) 00:21:08.617 5.653 - 5.680: 99.5193% ( 1) 00:21:08.617 5.760 - 5.787: 99.5246% ( 1) 00:21:08.617 5.973 - 6.000: 99.5300% ( 1) 00:21:08.617 6.000 - 6.027: 99.5353% ( 1) 00:21:08.617 6.027 - 6.053: 99.5406% ( 1) 00:21:08.617 6.080 - 6.107: 99.5460% ( 1) 00:21:08.617 6.107 - 6.133: 99.5513% ( 1) 00:21:08.617 6.160 - 6.187: 99.5620% ( 2) 00:21:08.617 6.187 - 6.213: 99.5674% ( 1) 00:21:08.617 6.213 - 6.240: 99.5727% ( 1) 00:21:08.617 6.373 - 6.400: 99.5834% ( 2) 00:21:08.617 6.400 - 6.427: 99.5887% ( 1) 00:21:08.617 6.427 - 6.453: 99.5994% ( 2) 00:21:08.617 6.507 - 6.533: 99.6047% ( 1) 00:21:08.617 6.560 - 6.587: 99.6101% ( 1) 00:21:08.617 6.613 - 6.640: 99.6154% ( 1) 00:21:08.617 6.640 - 6.667: 99.6208% ( 1) 00:21:08.617 6.667 - 6.693: 99.6314% ( 2) 00:21:08.617 6.693 - 6.720: 99.6368% ( 1) 00:21:08.617 6.747 - 6.773: 99.6421% ( 1) 00:21:08.617 6.827 - 6.880: 99.6582% ( 3) 00:21:08.617 6.880 - 6.933: 99.6688% ( 2) 00:21:08.617 6.933 - 6.987: 99.6902% ( 4) 00:21:08.617 6.987 - 7.040: 99.7009% ( 2) 00:21:08.617 7.093 - 7.147: 99.7223% ( 4) 00:21:08.617 7.253 - 7.307: 99.7383% ( 3) 00:21:08.617 7.307 - 7.360: 99.7543% ( 3) 00:21:08.617 7.360 - 7.413: 99.7703% ( 3) 00:21:08.617 7.413 - 7.467: 99.7757% ( 1) 00:21:08.617 7.467 - 7.520: 99.7863% ( 2) 00:21:08.617 7.520 - 7.573: 99.7917% ( 1) 00:21:08.617 7.573 - 7.627: 99.7970% ( 1) 00:21:08.617 7.733 - 7.787: 99.8184% ( 4) 00:21:08.617 7.840 - 7.893: 99.8237% ( 1) 00:21:08.617 7.893 - 7.947: 99.8344% ( 2) 00:21:08.617 7.947 - 8.000: 99.8398% ( 1) 00:21:08.617 8.000 - 8.053: 99.8451% ( 1) 00:21:08.617 8.107 - 8.160: 99.8558% ( 2) 00:21:08.617 8.373 - 8.427: 99.8611% ( 1) 00:21:08.617 8.427 - 8.480: 99.8665% ( 1) 00:21:08.617 9.333 - 9.387: 99.8718% ( 1) 00:21:08.617 9.813 - 9.867: 99.8771% ( 1) 00:21:08.617 13.333 - 13.387: 99.8825% ( 1) 00:21:08.617 14.613 - 14.720: 99.8878% ( 1) 00:21:08.617 3986.773 
- 4014.080: 99.9947% ( 20) 00:21:08.617 4969.813 - 4997.120: 100.0000% ( 1) 00:21:08.617 00:21:08.617 Complete histogram 00:21:08.617 ================== 00:21:08.617 Range in us Cumulative Count 00:21:08.617 2.387 - 2.400: 0.0053% ( 1) 00:21:08.617 2.400 - 2.413: 0.7905% ( 147) 00:21:08.617 2.413 - 2.427: 1.3407% ( 103) 00:21:08.617 2.427 - 2.440: 1.4635% ( 23) 00:21:08.617 2.440 - 2.453: 1.5917% ( 24) 00:21:08.617 2.453 - 2.467: 43.7293% ( 7889) 00:21:08.617 2.467 - 2.480: 73.5872% ( 5590) 00:21:08.617 2.480 - [2024-11-05 16:45:15.296716] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:08.617 2.493: 83.8853% ( 1928) 00:21:08.617 2.493 - 2.507: 90.6260% ( 1262) 00:21:08.617 2.507 - 2.520: 92.1857% ( 292) 00:21:08.617 2.520 - 2.533: 93.9376% ( 328) 00:21:08.617 2.533 - 2.547: 96.6083% ( 500) 00:21:08.617 2.547 - 2.560: 98.3282% ( 322) 00:21:08.617 2.560 - 2.573: 99.0706% ( 139) 00:21:08.617 2.573 - 2.587: 99.3804% ( 58) 00:21:08.617 2.587 - 2.600: 99.4498% ( 13) 00:21:08.617 2.600 - 2.613: 99.4659% ( 3) 00:21:08.617 4.507 - 4.533: 99.4712% ( 1) 00:21:08.617 4.640 - 4.667: 99.4819% ( 2) 00:21:08.617 4.720 - 4.747: 99.4872% ( 1) 00:21:08.617 4.960 - 4.987: 99.4926% ( 1) 00:21:08.617 4.987 - 5.013: 99.4979% ( 1) 00:21:08.617 5.067 - 5.093: 99.5033% ( 1) 00:21:08.617 5.093 - 5.120: 99.5086% ( 1) 00:21:08.617 5.120 - 5.147: 99.5139% ( 1) 00:21:08.617 5.147 - 5.173: 99.5193% ( 1) 00:21:08.617 5.200 - 5.227: 99.5246% ( 1) 00:21:08.617 5.227 - 5.253: 99.5300% ( 1) 00:21:08.617 5.307 - 5.333: 99.5353% ( 1) 00:21:08.617 5.333 - 5.360: 99.5406% ( 1) 00:21:08.617 5.387 - 5.413: 99.5513% ( 2) 00:21:08.617 5.493 - 5.520: 99.5567% ( 1) 00:21:08.617 5.573 - 5.600: 99.5620% ( 1) 00:21:08.617 5.653 - 5.680: 99.5674% ( 1) 00:21:08.617 5.787 - 5.813: 99.5727% ( 1) 00:21:08.617 5.813 - 5.840: 99.5780% ( 1) 00:21:08.617 5.867 - 5.893: 99.5834% ( 1) 00:21:08.617 6.347 - 6.373: 99.5887% ( 1) 00:21:08.617 6.880 - 6.933: 99.5941% ( 
1) 00:21:08.617 7.200 - 7.253: 99.5994% ( 1) 00:21:08.617 12.480 - 12.533: 99.6047% ( 1) 00:21:08.617 3031.040 - 3044.693: 99.6101% ( 1) 00:21:08.617 3986.773 - 4014.080: 100.0000% ( 73) 00:21:08.617 00:21:08.617 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:21:08.617 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:21:08.617 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:21:08.617 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:21:08.617 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:08.617 [ 00:21:08.617 { 00:21:08.617 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:08.617 "subtype": "Discovery", 00:21:08.617 "listen_addresses": [], 00:21:08.617 "allow_any_host": true, 00:21:08.617 "hosts": [] 00:21:08.617 }, 00:21:08.617 { 00:21:08.617 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:08.617 "subtype": "NVMe", 00:21:08.617 "listen_addresses": [ 00:21:08.617 { 00:21:08.617 "trtype": "VFIOUSER", 00:21:08.617 "adrfam": "IPv4", 00:21:08.617 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:08.617 "trsvcid": "0" 00:21:08.617 } 00:21:08.617 ], 00:21:08.617 "allow_any_host": true, 00:21:08.617 "hosts": [], 00:21:08.617 "serial_number": "SPDK1", 00:21:08.617 "model_number": "SPDK bdev Controller", 00:21:08.617 "max_namespaces": 32, 00:21:08.617 "min_cntlid": 1, 00:21:08.617 "max_cntlid": 65519, 00:21:08.617 "namespaces": [ 00:21:08.617 { 00:21:08.617 "nsid": 1, 00:21:08.617 "bdev_name": "Malloc1", 00:21:08.617 "name": "Malloc1", 00:21:08.617 "nguid": 
"30E4AD52073F498FB3A7336DABC4C8BD", 00:21:08.617 "uuid": "30e4ad52-073f-498f-b3a7-336dabc4c8bd" 00:21:08.617 } 00:21:08.617 ] 00:21:08.617 }, 00:21:08.617 { 00:21:08.617 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:08.617 "subtype": "NVMe", 00:21:08.617 "listen_addresses": [ 00:21:08.617 { 00:21:08.617 "trtype": "VFIOUSER", 00:21:08.617 "adrfam": "IPv4", 00:21:08.617 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:08.617 "trsvcid": "0" 00:21:08.617 } 00:21:08.617 ], 00:21:08.617 "allow_any_host": true, 00:21:08.617 "hosts": [], 00:21:08.617 "serial_number": "SPDK2", 00:21:08.617 "model_number": "SPDK bdev Controller", 00:21:08.617 "max_namespaces": 32, 00:21:08.617 "min_cntlid": 1, 00:21:08.617 "max_cntlid": 65519, 00:21:08.617 "namespaces": [ 00:21:08.617 { 00:21:08.617 "nsid": 1, 00:21:08.617 "bdev_name": "Malloc2", 00:21:08.617 "name": "Malloc2", 00:21:08.617 "nguid": "761428205F194B74A57CF86C958B1CD8", 00:21:08.617 "uuid": "76142820-5f19-4b74-a57c-f86c958b1cd8" 00:21:08.618 } 00:21:08.618 ] 00:21:08.618 } 00:21:08.618 ] 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3130768 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:21:08.618 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:21:08.879 Malloc3 00:21:08.879 [2024-11-05 16:45:15.715151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:08.879 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:21:08.879 [2024-11-05 16:45:15.895376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:08.879 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:08.879 Asynchronous Event Request test 00:21:08.879 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:08.879 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:08.879 Registering asynchronous event callbacks... 00:21:08.879 Starting namespace attribute notice tests for all controllers... 00:21:08.879 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:08.879 aer_cb - Changed Namespace 00:21:08.879 Cleaning up... 
00:21:09.151 [ 00:21:09.151 { 00:21:09.151 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:09.151 "subtype": "Discovery", 00:21:09.151 "listen_addresses": [], 00:21:09.151 "allow_any_host": true, 00:21:09.151 "hosts": [] 00:21:09.151 }, 00:21:09.151 { 00:21:09.151 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:09.151 "subtype": "NVMe", 00:21:09.151 "listen_addresses": [ 00:21:09.151 { 00:21:09.151 "trtype": "VFIOUSER", 00:21:09.151 "adrfam": "IPv4", 00:21:09.151 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:09.151 "trsvcid": "0" 00:21:09.151 } 00:21:09.151 ], 00:21:09.151 "allow_any_host": true, 00:21:09.151 "hosts": [], 00:21:09.151 "serial_number": "SPDK1", 00:21:09.151 "model_number": "SPDK bdev Controller", 00:21:09.151 "max_namespaces": 32, 00:21:09.151 "min_cntlid": 1, 00:21:09.151 "max_cntlid": 65519, 00:21:09.151 "namespaces": [ 00:21:09.151 { 00:21:09.151 "nsid": 1, 00:21:09.151 "bdev_name": "Malloc1", 00:21:09.151 "name": "Malloc1", 00:21:09.151 "nguid": "30E4AD52073F498FB3A7336DABC4C8BD", 00:21:09.151 "uuid": "30e4ad52-073f-498f-b3a7-336dabc4c8bd" 00:21:09.151 }, 00:21:09.151 { 00:21:09.151 "nsid": 2, 00:21:09.151 "bdev_name": "Malloc3", 00:21:09.151 "name": "Malloc3", 00:21:09.151 "nguid": "E9E53E07C20A4F3C812EBBE76851CACC", 00:21:09.151 "uuid": "e9e53e07-c20a-4f3c-812e-bbe76851cacc" 00:21:09.151 } 00:21:09.151 ] 00:21:09.151 }, 00:21:09.151 { 00:21:09.151 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:09.151 "subtype": "NVMe", 00:21:09.151 "listen_addresses": [ 00:21:09.151 { 00:21:09.151 "trtype": "VFIOUSER", 00:21:09.151 "adrfam": "IPv4", 00:21:09.151 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:09.151 "trsvcid": "0" 00:21:09.151 } 00:21:09.151 ], 00:21:09.151 "allow_any_host": true, 00:21:09.151 "hosts": [], 00:21:09.151 "serial_number": "SPDK2", 00:21:09.151 "model_number": "SPDK bdev Controller", 00:21:09.151 "max_namespaces": 32, 00:21:09.151 "min_cntlid": 1, 00:21:09.151 "max_cntlid": 65519, 00:21:09.151 "namespaces": [ 
00:21:09.151 { 00:21:09.151 "nsid": 1, 00:21:09.151 "bdev_name": "Malloc2", 00:21:09.151 "name": "Malloc2", 00:21:09.151 "nguid": "761428205F194B74A57CF86C958B1CD8", 00:21:09.151 "uuid": "76142820-5f19-4b74-a57c-f86c958b1cd8" 00:21:09.151 } 00:21:09.151 ] 00:21:09.151 } 00:21:09.151 ] 00:21:09.151 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3130768 00:21:09.151 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:09.151 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:21:09.151 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:21:09.151 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:21:09.151 [2024-11-05 16:45:16.134693] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:21:09.151 [2024-11-05 16:45:16.134733] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130784 ] 00:21:09.151 [2024-11-05 16:45:16.186804] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:21:09.151 [2024-11-05 16:45:16.199987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:09.151 [2024-11-05 16:45:16.200010] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdb570eb000 00:21:09.151 [2024-11-05 16:45:16.200981] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:09.151 [2024-11-05 16:45:16.201983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:09.151 [2024-11-05 16:45:16.202994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:09.151 [2024-11-05 16:45:16.204000] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:09.151 [2024-11-05 16:45:16.205007] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:09.151 [2024-11-05 16:45:16.206013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:09.151 [2024-11-05 16:45:16.207020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:09.151 
[2024-11-05 16:45:16.208034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:09.151 [2024-11-05 16:45:16.209045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:09.151 [2024-11-05 16:45:16.209059] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdb570e0000 00:21:09.151 [2024-11-05 16:45:16.210384] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:09.414 [2024-11-05 16:45:16.226649] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:21:09.414 [2024-11-05 16:45:16.226676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:21:09.414 [2024-11-05 16:45:16.228730] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:21:09.414 [2024-11-05 16:45:16.228784] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:21:09.414 [2024-11-05 16:45:16.228868] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:21:09.414 [2024-11-05 16:45:16.228883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:21:09.415 [2024-11-05 16:45:16.228889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:21:09.415 [2024-11-05 16:45:16.230752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:21:09.415 [2024-11-05 16:45:16.230763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:21:09.415 [2024-11-05 16:45:16.230771] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:21:09.415 [2024-11-05 16:45:16.231749] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:21:09.415 [2024-11-05 16:45:16.231759] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:21:09.415 [2024-11-05 16:45:16.231767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:21:09.415 [2024-11-05 16:45:16.232759] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:21:09.415 [2024-11-05 16:45:16.232768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:09.415 [2024-11-05 16:45:16.233766] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:21:09.415 [2024-11-05 16:45:16.233776] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:21:09.415 [2024-11-05 16:45:16.233781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:21:09.415 [2024-11-05 16:45:16.233788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:09.415 [2024-11-05 16:45:16.233896] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:21:09.415 [2024-11-05 16:45:16.233901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:09.415 [2024-11-05 16:45:16.233907] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:21:09.415 [2024-11-05 16:45:16.234773] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:21:09.415 [2024-11-05 16:45:16.235778] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:21:09.415 [2024-11-05 16:45:16.236785] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:21:09.415 [2024-11-05 16:45:16.237782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:09.415 [2024-11-05 16:45:16.237822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:09.415 [2024-11-05 16:45:16.238791] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:21:09.415 [2024-11-05 16:45:16.238801] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:09.415 [2024-11-05 16:45:16.238806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.238828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:21:09.415 [2024-11-05 16:45:16.238836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.238849] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:09.415 [2024-11-05 16:45:16.238855] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:09.415 [2024-11-05 16:45:16.238860] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:09.415 [2024-11-05 16:45:16.238872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:09.415 [2024-11-05 16:45:16.249754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:21:09.415 [2024-11-05 16:45:16.249767] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:21:09.415 [2024-11-05 16:45:16.249772] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:21:09.415 [2024-11-05 16:45:16.249777] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:21:09.415 [2024-11-05 16:45:16.249781] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:21:09.415 [2024-11-05 16:45:16.249786] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:21:09.415 [2024-11-05 16:45:16.249794] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:21:09.415 [2024-11-05 16:45:16.249799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.249807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.249817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:21:09.415 [2024-11-05 16:45:16.257753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:21:09.415 [2024-11-05 16:45:16.257769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.415 [2024-11-05 16:45:16.257780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.415 [2024-11-05 16:45:16.257789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.415 [2024-11-05 16:45:16.257798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.415 [2024-11-05 16:45:16.257805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.257812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.257823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:21:09.415 [2024-11-05 16:45:16.265751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:21:09.415 [2024-11-05 16:45:16.265762] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:21:09.415 [2024-11-05 16:45:16.265767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.265774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.265780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.265792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:09.415 [2024-11-05 16:45:16.273753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:21:09.415 [2024-11-05 16:45:16.273821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.273829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:21:09.415 
[2024-11-05 16:45:16.273837] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:21:09.415 [2024-11-05 16:45:16.273842] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:21:09.415 [2024-11-05 16:45:16.273845] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:09.415 [2024-11-05 16:45:16.273852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:21:09.415 [2024-11-05 16:45:16.281751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:21:09.415 [2024-11-05 16:45:16.281761] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:21:09.415 [2024-11-05 16:45:16.281774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.281783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:21:09.415 [2024-11-05 16:45:16.281790] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:09.416 [2024-11-05 16:45:16.281795] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:09.416 [2024-11-05 16:45:16.281798] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:09.416 [2024-11-05 16:45:16.281804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.289751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 16:45:16.289766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.289775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.289782] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:09.416 [2024-11-05 16:45:16.289787] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:09.416 [2024-11-05 16:45:16.289790] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:09.416 [2024-11-05 16:45:16.289797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.297751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 16:45:16.297761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.297768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.297778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.297785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.297790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.297795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.297800] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:21:09.416 [2024-11-05 16:45:16.297805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:21:09.416 [2024-11-05 16:45:16.297811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:21:09.416 [2024-11-05 16:45:16.297827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.305751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 16:45:16.305765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.313752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 16:45:16.313766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.321753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 
16:45:16.321767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.329751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 16:45:16.329768] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:21:09.416 [2024-11-05 16:45:16.329773] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:21:09.416 [2024-11-05 16:45:16.329777] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:21:09.416 [2024-11-05 16:45:16.329780] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:21:09.416 [2024-11-05 16:45:16.329784] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:21:09.416 [2024-11-05 16:45:16.329790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:21:09.416 [2024-11-05 16:45:16.329798] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:21:09.416 [2024-11-05 16:45:16.329802] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:21:09.416 [2024-11-05 16:45:16.329806] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:09.416 [2024-11-05 16:45:16.329812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.329819] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:21:09.416 [2024-11-05 16:45:16.329823] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:09.416 [2024-11-05 16:45:16.329829] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:09.416 [2024-11-05 16:45:16.329835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.329844] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:21:09.416 [2024-11-05 16:45:16.329849] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:21:09.416 [2024-11-05 16:45:16.329852] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:09.416 [2024-11-05 16:45:16.329858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:21:09.416 [2024-11-05 16:45:16.337753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 16:45:16.337768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 16:45:16.337779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:21:09.416 [2024-11-05 16:45:16.337786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:21:09.416 ===================================================== 00:21:09.416 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:09.416 ===================================================== 00:21:09.416 Controller Capabilities/Features 00:21:09.416 
================================ 00:21:09.416 Vendor ID: 4e58 00:21:09.416 Subsystem Vendor ID: 4e58 00:21:09.416 Serial Number: SPDK2 00:21:09.416 Model Number: SPDK bdev Controller 00:21:09.416 Firmware Version: 25.01 00:21:09.416 Recommended Arb Burst: 6 00:21:09.416 IEEE OUI Identifier: 8d 6b 50 00:21:09.416 Multi-path I/O 00:21:09.416 May have multiple subsystem ports: Yes 00:21:09.416 May have multiple controllers: Yes 00:21:09.416 Associated with SR-IOV VF: No 00:21:09.416 Max Data Transfer Size: 131072 00:21:09.416 Max Number of Namespaces: 32 00:21:09.416 Max Number of I/O Queues: 127 00:21:09.416 NVMe Specification Version (VS): 1.3 00:21:09.416 NVMe Specification Version (Identify): 1.3 00:21:09.416 Maximum Queue Entries: 256 00:21:09.416 Contiguous Queues Required: Yes 00:21:09.416 Arbitration Mechanisms Supported 00:21:09.416 Weighted Round Robin: Not Supported 00:21:09.416 Vendor Specific: Not Supported 00:21:09.416 Reset Timeout: 15000 ms 00:21:09.416 Doorbell Stride: 4 bytes 00:21:09.416 NVM Subsystem Reset: Not Supported 00:21:09.416 Command Sets Supported 00:21:09.416 NVM Command Set: Supported 00:21:09.416 Boot Partition: Not Supported 00:21:09.416 Memory Page Size Minimum: 4096 bytes 00:21:09.416 Memory Page Size Maximum: 4096 bytes 00:21:09.416 Persistent Memory Region: Not Supported 00:21:09.416 Optional Asynchronous Events Supported 00:21:09.416 Namespace Attribute Notices: Supported 00:21:09.416 Firmware Activation Notices: Not Supported 00:21:09.416 ANA Change Notices: Not Supported 00:21:09.416 PLE Aggregate Log Change Notices: Not Supported 00:21:09.416 LBA Status Info Alert Notices: Not Supported 00:21:09.416 EGE Aggregate Log Change Notices: Not Supported 00:21:09.416 Normal NVM Subsystem Shutdown event: Not Supported 00:21:09.416 Zone Descriptor Change Notices: Not Supported 00:21:09.416 Discovery Log Change Notices: Not Supported 00:21:09.416 Controller Attributes 00:21:09.417 128-bit Host Identifier: Supported 00:21:09.417 
Non-Operational Permissive Mode: Not Supported 00:21:09.417 NVM Sets: Not Supported 00:21:09.417 Read Recovery Levels: Not Supported 00:21:09.417 Endurance Groups: Not Supported 00:21:09.417 Predictable Latency Mode: Not Supported 00:21:09.417 Traffic Based Keep ALive: Not Supported 00:21:09.417 Namespace Granularity: Not Supported 00:21:09.417 SQ Associations: Not Supported 00:21:09.417 UUID List: Not Supported 00:21:09.417 Multi-Domain Subsystem: Not Supported 00:21:09.417 Fixed Capacity Management: Not Supported 00:21:09.417 Variable Capacity Management: Not Supported 00:21:09.417 Delete Endurance Group: Not Supported 00:21:09.417 Delete NVM Set: Not Supported 00:21:09.417 Extended LBA Formats Supported: Not Supported 00:21:09.417 Flexible Data Placement Supported: Not Supported 00:21:09.417 00:21:09.417 Controller Memory Buffer Support 00:21:09.417 ================================ 00:21:09.417 Supported: No 00:21:09.417 00:21:09.417 Persistent Memory Region Support 00:21:09.417 ================================ 00:21:09.417 Supported: No 00:21:09.417 00:21:09.417 Admin Command Set Attributes 00:21:09.417 ============================ 00:21:09.417 Security Send/Receive: Not Supported 00:21:09.417 Format NVM: Not Supported 00:21:09.417 Firmware Activate/Download: Not Supported 00:21:09.417 Namespace Management: Not Supported 00:21:09.417 Device Self-Test: Not Supported 00:21:09.417 Directives: Not Supported 00:21:09.417 NVMe-MI: Not Supported 00:21:09.417 Virtualization Management: Not Supported 00:21:09.417 Doorbell Buffer Config: Not Supported 00:21:09.417 Get LBA Status Capability: Not Supported 00:21:09.417 Command & Feature Lockdown Capability: Not Supported 00:21:09.417 Abort Command Limit: 4 00:21:09.417 Async Event Request Limit: 4 00:21:09.417 Number of Firmware Slots: N/A 00:21:09.417 Firmware Slot 1 Read-Only: N/A 00:21:09.417 Firmware Activation Without Reset: N/A 00:21:09.417 Multiple Update Detection Support: N/A 00:21:09.417 Firmware Update 
Granularity: No Information Provided 00:21:09.417 Per-Namespace SMART Log: No 00:21:09.417 Asymmetric Namespace Access Log Page: Not Supported 00:21:09.417 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:21:09.417 Command Effects Log Page: Supported 00:21:09.417 Get Log Page Extended Data: Supported 00:21:09.417 Telemetry Log Pages: Not Supported 00:21:09.417 Persistent Event Log Pages: Not Supported 00:21:09.417 Supported Log Pages Log Page: May Support 00:21:09.417 Commands Supported & Effects Log Page: Not Supported 00:21:09.417 Feature Identifiers & Effects Log Page:May Support 00:21:09.417 NVMe-MI Commands & Effects Log Page: May Support 00:21:09.417 Data Area 4 for Telemetry Log: Not Supported 00:21:09.417 Error Log Page Entries Supported: 128 00:21:09.417 Keep Alive: Supported 00:21:09.417 Keep Alive Granularity: 10000 ms 00:21:09.417 00:21:09.417 NVM Command Set Attributes 00:21:09.417 ========================== 00:21:09.417 Submission Queue Entry Size 00:21:09.417 Max: 64 00:21:09.417 Min: 64 00:21:09.417 Completion Queue Entry Size 00:21:09.417 Max: 16 00:21:09.417 Min: 16 00:21:09.417 Number of Namespaces: 32 00:21:09.417 Compare Command: Supported 00:21:09.417 Write Uncorrectable Command: Not Supported 00:21:09.417 Dataset Management Command: Supported 00:21:09.417 Write Zeroes Command: Supported 00:21:09.417 Set Features Save Field: Not Supported 00:21:09.417 Reservations: Not Supported 00:21:09.417 Timestamp: Not Supported 00:21:09.417 Copy: Supported 00:21:09.417 Volatile Write Cache: Present 00:21:09.417 Atomic Write Unit (Normal): 1 00:21:09.417 Atomic Write Unit (PFail): 1 00:21:09.417 Atomic Compare & Write Unit: 1 00:21:09.417 Fused Compare & Write: Supported 00:21:09.417 Scatter-Gather List 00:21:09.417 SGL Command Set: Supported (Dword aligned) 00:21:09.417 SGL Keyed: Not Supported 00:21:09.417 SGL Bit Bucket Descriptor: Not Supported 00:21:09.417 SGL Metadata Pointer: Not Supported 00:21:09.417 Oversized SGL: Not Supported 00:21:09.417 SGL 
Metadata Address: Not Supported 00:21:09.417 SGL Offset: Not Supported 00:21:09.417 Transport SGL Data Block: Not Supported 00:21:09.417 Replay Protected Memory Block: Not Supported 00:21:09.417 00:21:09.417 Firmware Slot Information 00:21:09.417 ========================= 00:21:09.417 Active slot: 1 00:21:09.417 Slot 1 Firmware Revision: 25.01 00:21:09.417 00:21:09.417 00:21:09.417 Commands Supported and Effects 00:21:09.417 ============================== 00:21:09.417 Admin Commands 00:21:09.417 -------------- 00:21:09.417 Get Log Page (02h): Supported 00:21:09.417 Identify (06h): Supported 00:21:09.417 Abort (08h): Supported 00:21:09.417 Set Features (09h): Supported 00:21:09.417 Get Features (0Ah): Supported 00:21:09.417 Asynchronous Event Request (0Ch): Supported 00:21:09.417 Keep Alive (18h): Supported 00:21:09.417 I/O Commands 00:21:09.417 ------------ 00:21:09.417 Flush (00h): Supported LBA-Change 00:21:09.417 Write (01h): Supported LBA-Change 00:21:09.417 Read (02h): Supported 00:21:09.417 Compare (05h): Supported 00:21:09.417 Write Zeroes (08h): Supported LBA-Change 00:21:09.417 Dataset Management (09h): Supported LBA-Change 00:21:09.417 Copy (19h): Supported LBA-Change 00:21:09.417 00:21:09.417 Error Log 00:21:09.417 ========= 00:21:09.417 00:21:09.417 Arbitration 00:21:09.417 =========== 00:21:09.417 Arbitration Burst: 1 00:21:09.417 00:21:09.417 Power Management 00:21:09.417 ================ 00:21:09.417 Number of Power States: 1 00:21:09.417 Current Power State: Power State #0 00:21:09.417 Power State #0: 00:21:09.417 Max Power: 0.00 W 00:21:09.417 Non-Operational State: Operational 00:21:09.417 Entry Latency: Not Reported 00:21:09.417 Exit Latency: Not Reported 00:21:09.417 Relative Read Throughput: 0 00:21:09.417 Relative Read Latency: 0 00:21:09.417 Relative Write Throughput: 0 00:21:09.417 Relative Write Latency: 0 00:21:09.417 Idle Power: Not Reported 00:21:09.417 Active Power: Not Reported 00:21:09.417 Non-Operational Permissive Mode: Not 
Supported 00:21:09.417 00:21:09.417 Health Information 00:21:09.417 ================== 00:21:09.417 Critical Warnings: 00:21:09.417 Available Spare Space: OK 00:21:09.417 Temperature: OK 00:21:09.417 Device Reliability: OK 00:21:09.417 Read Only: No 00:21:09.417 Volatile Memory Backup: OK 00:21:09.417 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:09.417 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:09.417 Available Spare: 0% 00:21:09.417 Available Spare Threshold: 0% 00:21:09.418 [2024-11-05 16:45:16.337892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:21:09.417 [2024-11-05 16:45:16.345752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:21:09.417 [2024-11-05 16:45:16.345784] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:21:09.417 [2024-11-05 16:45:16.345794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.417 [2024-11-05 16:45:16.345801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.417 [2024-11-05 16:45:16.345807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.417 [2024-11-05 16:45:16.345814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.417 [2024-11-05 16:45:16.345863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:21:09.417 [2024-11-05 16:45:16.345874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:21:09.417 
[2024-11-05 16:45:16.346868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:09.417 [2024-11-05 16:45:16.346917] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:21:09.417 [2024-11-05 16:45:16.346924] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:21:09.417 [2024-11-05 16:45:16.347873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:21:09.417 [2024-11-05 16:45:16.347884] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:21:09.417 [2024-11-05 16:45:16.347932] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:21:09.418 [2024-11-05 16:45:16.349306] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:09.418 Life Percentage Used: 0% 00:21:09.418 Data Units Read: 0 00:21:09.418 Data Units Written: 0 00:21:09.418 Host Read Commands: 0 00:21:09.418 Host Write Commands: 0 00:21:09.418 Controller Busy Time: 0 minutes 00:21:09.418 Power Cycles: 0 00:21:09.418 Power On Hours: 0 hours 00:21:09.418 Unsafe Shutdowns: 0 00:21:09.418 Unrecoverable Media Errors: 0 00:21:09.418 Lifetime Error Log Entries: 0 00:21:09.418 Warning Temperature Time: 0 minutes 00:21:09.418 Critical Temperature Time: 0 minutes 00:21:09.418 00:21:09.418 Number of Queues 00:21:09.418 ================ 00:21:09.418 Number of I/O Submission Queues: 127 00:21:09.418 Number of I/O Completion Queues: 127 00:21:09.418 00:21:09.418 Active Namespaces 00:21:09.418 ================= 00:21:09.418 Namespace ID:1 00:21:09.418 Error Recovery Timeout: Unlimited 
00:21:09.418 Command Set Identifier: NVM (00h) 00:21:09.418 Deallocate: Supported 00:21:09.418 Deallocated/Unwritten Error: Not Supported 00:21:09.418 Deallocated Read Value: Unknown 00:21:09.418 Deallocate in Write Zeroes: Not Supported 00:21:09.418 Deallocated Guard Field: 0xFFFF 00:21:09.418 Flush: Supported 00:21:09.418 Reservation: Supported 00:21:09.418 Namespace Sharing Capabilities: Multiple Controllers 00:21:09.418 Size (in LBAs): 131072 (0GiB) 00:21:09.418 Capacity (in LBAs): 131072 (0GiB) 00:21:09.418 Utilization (in LBAs): 131072 (0GiB) 00:21:09.418 NGUID: 761428205F194B74A57CF86C958B1CD8 00:21:09.418 UUID: 76142820-5f19-4b74-a57c-f86c958b1cd8 00:21:09.418 Thin Provisioning: Not Supported 00:21:09.418 Per-NS Atomic Units: Yes 00:21:09.418 Atomic Boundary Size (Normal): 0 00:21:09.418 Atomic Boundary Size (PFail): 0 00:21:09.418 Atomic Boundary Offset: 0 00:21:09.418 Maximum Single Source Range Length: 65535 00:21:09.418 Maximum Copy Length: 65535 00:21:09.418 Maximum Source Range Count: 1 00:21:09.418 NGUID/EUI64 Never Reused: No 00:21:09.418 Namespace Write Protected: No 00:21:09.418 Number of LBA Formats: 1 00:21:09.418 Current LBA Format: LBA Format #00 00:21:09.418 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:09.418 00:21:09.418 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:21:09.678 [2024-11-05 16:45:16.552836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:14.968 Initializing NVMe Controllers 00:21:14.968 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:14.968 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:21:14.968 Initialization complete. Launching workers. 00:21:14.968 ======================================================== 00:21:14.968 Latency(us) 00:21:14.968 Device Information : IOPS MiB/s Average min max 00:21:14.968 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39926.81 155.96 3205.74 844.02 10780.10 00:21:14.968 ======================================================== 00:21:14.968 Total : 39926.81 155.96 3205.74 844.02 10780.10 00:21:14.968 00:21:14.968 [2024-11-05 16:45:21.666939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:14.968 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:21:14.968 [2024-11-05 16:45:21.860570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:20.258 Initializing NVMe Controllers 00:21:20.258 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:20.258 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:21:20.258 Initialization complete. Launching workers. 
00:21:20.258 ======================================================== 00:21:20.258 Latency(us) 00:21:20.258 Device Information : IOPS MiB/s Average min max 00:21:20.258 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35479.00 138.59 3607.43 1100.51 7664.78 00:21:20.258 ======================================================== 00:21:20.258 Total : 35479.00 138.59 3607.43 1100.51 7664.78 00:21:20.258 00:21:20.258 [2024-11-05 16:45:26.876957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:20.258 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:21:20.258 [2024-11-05 16:45:27.083185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:25.547 [2024-11-05 16:45:32.217833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:25.547 Initializing NVMe Controllers 00:21:25.547 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:25.547 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:25.547 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:21:25.547 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:21:25.547 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:21:25.547 Initialization complete. Launching workers. 
00:21:25.547 Starting thread on core 2 00:21:25.547 Starting thread on core 3 00:21:25.547 Starting thread on core 1 00:21:25.547 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:21:25.547 [2024-11-05 16:45:32.499170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:28.850 [2024-11-05 16:45:35.568247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:28.850 Initializing NVMe Controllers 00:21:28.850 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:28.850 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:28.850 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:21:28.850 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:21:28.850 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:21:28.850 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:21:28.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:21:28.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:21:28.850 Initialization complete. Launching workers. 
00:21:28.850 Starting thread on core 1 with urgent priority queue 00:21:28.850 Starting thread on core 2 with urgent priority queue 00:21:28.850 Starting thread on core 3 with urgent priority queue 00:21:28.850 Starting thread on core 0 with urgent priority queue 00:21:28.850 SPDK bdev Controller (SPDK2 ) core 0: 8080.33 IO/s 12.38 secs/100000 ios 00:21:28.850 SPDK bdev Controller (SPDK2 ) core 1: 14151.33 IO/s 7.07 secs/100000 ios 00:21:28.850 SPDK bdev Controller (SPDK2 ) core 2: 11418.00 IO/s 8.76 secs/100000 ios 00:21:28.850 SPDK bdev Controller (SPDK2 ) core 3: 8462.67 IO/s 11.82 secs/100000 ios 00:21:28.850 ======================================================== 00:21:28.850 00:21:28.850 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:21:28.850 [2024-11-05 16:45:35.857178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:28.850 Initializing NVMe Controllers 00:21:28.850 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:28.850 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:28.850 Namespace ID: 1 size: 0GB 00:21:28.850 Initialization complete. 00:21:28.850 INFO: using host memory buffer for IO 00:21:28.850 Hello world! 
00:21:28.850 [2024-11-05 16:45:35.867233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:29.111 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:21:29.111 [2024-11-05 16:45:36.149274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:30.498 Initializing NVMe Controllers 00:21:30.498 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:30.498 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:30.498 Initialization complete. Launching workers. 00:21:30.498 submit (in ns) avg, min, max = 8899.0, 3906.7, 4004132.5 00:21:30.498 complete (in ns) avg, min, max = 17427.6, 2385.0, 4005544.2 00:21:30.498 00:21:30.498 Submit histogram 00:21:30.498 ================ 00:21:30.498 Range in us Cumulative Count 00:21:30.498 3.893 - 3.920: 0.6634% ( 126) 00:21:30.498 3.920 - 3.947: 6.6393% ( 1135) 00:21:30.498 3.947 - 3.973: 15.8953% ( 1758) 00:21:30.498 3.973 - 4.000: 26.1044% ( 1939) 00:21:30.498 4.000 - 4.027: 37.2822% ( 2123) 00:21:30.498 4.027 - 4.053: 46.5593% ( 1762) 00:21:30.498 4.053 - 4.080: 62.3914% ( 3007) 00:21:30.498 4.080 - 4.107: 78.6606% ( 3090) 00:21:30.498 4.107 - 4.133: 91.0967% ( 2362) 00:21:30.498 4.133 - 4.160: 96.4776% ( 1022) 00:21:30.498 4.160 - 4.187: 98.5363% ( 391) 00:21:30.498 4.187 - 4.213: 99.1997% ( 126) 00:21:30.498 4.213 - 4.240: 99.3735% ( 33) 00:21:30.498 4.240 - 4.267: 99.4156% ( 8) 00:21:30.498 4.267 - 4.293: 99.4261% ( 2) 00:21:30.498 4.373 - 4.400: 99.4314% ( 1) 00:21:30.498 4.400 - 4.427: 99.4366% ( 1) 00:21:30.498 4.667 - 4.693: 99.4472% ( 2) 00:21:30.498 4.747 - 4.773: 99.4524% ( 1) 00:21:30.498 4.800 - 4.827: 99.4577% ( 1) 00:21:30.498 5.013 - 5.040: 99.4630% ( 1) 
00:21:30.498 5.093 - 5.120: 99.4682% ( 1) 00:21:30.498 5.413 - 5.440: 99.4735% ( 1) 00:21:30.498 5.547 - 5.573: 99.4788% ( 1) 00:21:30.498 5.573 - 5.600: 99.4840% ( 1) 00:21:30.498 5.600 - 5.627: 99.4893% ( 1) 00:21:30.498 5.653 - 5.680: 99.4946% ( 1) 00:21:30.498 5.973 - 6.000: 99.4998% ( 1) 00:21:30.498 6.000 - 6.027: 99.5051% ( 1) 00:21:30.498 6.053 - 6.080: 99.5103% ( 1) 00:21:30.498 6.080 - 6.107: 99.5209% ( 2) 00:21:30.498 6.160 - 6.187: 99.5261% ( 1) 00:21:30.498 6.187 - 6.213: 99.5367% ( 2) 00:21:30.498 6.213 - 6.240: 99.5419% ( 1) 00:21:30.498 6.240 - 6.267: 99.5525% ( 2) 00:21:30.498 6.293 - 6.320: 99.5630% ( 2) 00:21:30.498 6.320 - 6.347: 99.5683% ( 1) 00:21:30.498 6.347 - 6.373: 99.5788% ( 2) 00:21:30.498 6.373 - 6.400: 99.5893% ( 2) 00:21:30.498 6.427 - 6.453: 99.5946% ( 1) 00:21:30.498 6.453 - 6.480: 99.5999% ( 1) 00:21:30.498 6.480 - 6.507: 99.6051% ( 1) 00:21:30.498 6.533 - 6.560: 99.6156% ( 2) 00:21:30.498 6.560 - 6.587: 99.6209% ( 1) 00:21:30.498 6.587 - 6.613: 99.6262% ( 1) 00:21:30.498 6.613 - 6.640: 99.6314% ( 1) 00:21:30.498 6.640 - 6.667: 99.6420% ( 2) 00:21:30.498 6.667 - 6.693: 99.6525% ( 2) 00:21:30.498 6.693 - 6.720: 99.6578% ( 1) 00:21:30.498 6.720 - 6.747: 99.6736% ( 3) 00:21:30.498 6.800 - 6.827: 99.6788% ( 1) 00:21:30.498 6.827 - 6.880: 99.6999% ( 4) 00:21:30.498 6.880 - 6.933: 99.7157% ( 3) 00:21:30.498 6.933 - 6.987: 99.7367% ( 4) 00:21:30.498 6.987 - 7.040: 99.7525% ( 3) 00:21:30.498 7.040 - 7.093: 99.7631% ( 2) 00:21:30.498 7.093 - 7.147: 99.7789% ( 3) 00:21:30.498 7.147 - 7.200: 99.7894% ( 2) 00:21:30.498 7.200 - 7.253: 99.8052% ( 3) 00:21:30.498 7.307 - 7.360: 99.8157% ( 2) 00:21:30.498 7.413 - 7.467: 99.8210% ( 1) 00:21:30.498 7.467 - 7.520: 99.8263% ( 1) 00:21:30.498 7.627 - 7.680: 99.8368% ( 2) 00:21:30.498 7.680 - 7.733: 99.8420% ( 1) 00:21:30.498 7.787 - 7.840: 99.8473% ( 1) 00:21:30.498 8.000 - 8.053: 99.8526% ( 1) 00:21:30.498 8.053 - 8.107: 99.8631% ( 2) 00:21:30.498 8.107 - 8.160: 99.8684% ( 1) 00:21:30.498 8.533 - 
8.587: 99.8736% ( 1) 00:21:30.498 9.387 - 9.440: 99.8789% ( 1) 00:21:30.498 3986.773 - 4014.080: 100.0000% ( 23) 00:21:30.498 00:21:30.498 Complete histogram 00:21:30.498 ================== 00:21:30.498 Range in us Cumulative Count 00:21:30.498 2.373 - 2.387: 0.0053% ( 1) 00:21:30.498 2.387 - 2.400: 0.3633% ( 68) 00:21:30.498 2.400 - 2.413: 0.6687% ( 58) 00:21:30.498 2.413 - 2.427: 0.7318% ( 12) 00:21:30.498 2.427 - 2.440: 0.8319% ( 19) 00:21:30.498 2.440 - 2.453: 0.8635% ( 6) 00:21:30.498 2.453 - [2024-11-05 16:45:37.244405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:30.498 2.467: 50.0553% ( 9343) 00:21:30.498 2.467 - 2.480: 62.2282% ( 2312) 00:21:30.498 2.480 - 2.493: 74.2642% ( 2286) 00:21:30.498 2.493 - 2.507: 79.1660% ( 931) 00:21:30.498 2.507 - 2.520: 81.2405% ( 394) 00:21:30.498 2.520 - 2.533: 84.0783% ( 539) 00:21:30.498 2.533 - 2.547: 89.8068% ( 1088) 00:21:30.498 2.547 - 2.560: 94.7191% ( 933) 00:21:30.498 2.560 - 2.573: 97.1726% ( 466) 00:21:30.498 2.573 - 2.587: 98.6521% ( 281) 00:21:30.498 2.587 - 2.600: 99.1628% ( 97) 00:21:30.498 2.600 - 2.613: 99.2892% ( 24) 00:21:30.498 2.613 - 2.627: 99.3577% ( 13) 00:21:30.498 2.627 - 2.640: 99.3840% ( 5) 00:21:30.498 2.867 - 2.880: 99.3892% ( 1) 00:21:30.498 4.533 - 4.560: 99.3945% ( 1) 00:21:30.498 4.587 - 4.613: 99.3998% ( 1) 00:21:30.498 4.640 - 4.667: 99.4050% ( 1) 00:21:30.498 4.693 - 4.720: 99.4103% ( 1) 00:21:30.498 4.880 - 4.907: 99.4208% ( 2) 00:21:30.498 4.907 - 4.933: 99.4366% ( 3) 00:21:30.498 4.987 - 5.013: 99.4419% ( 1) 00:21:30.498 5.013 - 5.040: 99.4472% ( 1) 00:21:30.498 5.040 - 5.067: 99.4524% ( 1) 00:21:30.498 5.067 - 5.093: 99.4577% ( 1) 00:21:30.498 5.120 - 5.147: 99.4788% ( 4) 00:21:30.498 5.173 - 5.200: 99.4893% ( 2) 00:21:30.498 5.200 - 5.227: 99.4946% ( 1) 00:21:30.498 5.413 - 5.440: 99.4998% ( 1) 00:21:30.498 5.467 - 5.493: 99.5051% ( 1) 00:21:30.498 5.493 - 5.520: 99.5156% ( 2) 00:21:30.498 5.547 - 5.573: 99.5261% ( 2) 
00:21:30.498 5.573 - 5.600: 99.5314% ( 1) 00:21:30.498 5.627 - 5.653: 99.5419% ( 2) 00:21:30.498 5.653 - 5.680: 99.5472% ( 1) 00:21:30.498 5.707 - 5.733: 99.5577% ( 2) 00:21:30.499 5.893 - 5.920: 99.5735% ( 3) 00:21:30.499 5.947 - 5.973: 99.5788% ( 1) 00:21:30.499 5.973 - 6.000: 99.5841% ( 1) 00:21:30.499 6.160 - 6.187: 99.5893% ( 1) 00:21:30.499 6.213 - 6.240: 99.5946% ( 1) 00:21:30.499 6.293 - 6.320: 99.5999% ( 1) 00:21:30.499 6.400 - 6.427: 99.6051% ( 1) 00:21:30.499 6.427 - 6.453: 99.6104% ( 1) 00:21:30.499 6.453 - 6.480: 99.6156% ( 1) 00:21:30.499 6.987 - 7.040: 99.6209% ( 1) 00:21:30.499 7.467 - 7.520: 99.6262% ( 1) 00:21:30.499 3986.773 - 4014.080: 100.0000% ( 71) 00:21:30.499 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:30.499 [ 00:21:30.499 { 00:21:30.499 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:30.499 "subtype": "Discovery", 00:21:30.499 "listen_addresses": [], 00:21:30.499 "allow_any_host": true, 00:21:30.499 "hosts": [] 00:21:30.499 }, 00:21:30.499 { 00:21:30.499 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:30.499 "subtype": "NVMe", 00:21:30.499 "listen_addresses": [ 00:21:30.499 { 00:21:30.499 "trtype": "VFIOUSER", 00:21:30.499 "adrfam": "IPv4", 00:21:30.499 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 
00:21:30.499 "trsvcid": "0" 00:21:30.499 } 00:21:30.499 ], 00:21:30.499 "allow_any_host": true, 00:21:30.499 "hosts": [], 00:21:30.499 "serial_number": "SPDK1", 00:21:30.499 "model_number": "SPDK bdev Controller", 00:21:30.499 "max_namespaces": 32, 00:21:30.499 "min_cntlid": 1, 00:21:30.499 "max_cntlid": 65519, 00:21:30.499 "namespaces": [ 00:21:30.499 { 00:21:30.499 "nsid": 1, 00:21:30.499 "bdev_name": "Malloc1", 00:21:30.499 "name": "Malloc1", 00:21:30.499 "nguid": "30E4AD52073F498FB3A7336DABC4C8BD", 00:21:30.499 "uuid": "30e4ad52-073f-498f-b3a7-336dabc4c8bd" 00:21:30.499 }, 00:21:30.499 { 00:21:30.499 "nsid": 2, 00:21:30.499 "bdev_name": "Malloc3", 00:21:30.499 "name": "Malloc3", 00:21:30.499 "nguid": "E9E53E07C20A4F3C812EBBE76851CACC", 00:21:30.499 "uuid": "e9e53e07-c20a-4f3c-812e-bbe76851cacc" 00:21:30.499 } 00:21:30.499 ] 00:21:30.499 }, 00:21:30.499 { 00:21:30.499 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:30.499 "subtype": "NVMe", 00:21:30.499 "listen_addresses": [ 00:21:30.499 { 00:21:30.499 "trtype": "VFIOUSER", 00:21:30.499 "adrfam": "IPv4", 00:21:30.499 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:30.499 "trsvcid": "0" 00:21:30.499 } 00:21:30.499 ], 00:21:30.499 "allow_any_host": true, 00:21:30.499 "hosts": [], 00:21:30.499 "serial_number": "SPDK2", 00:21:30.499 "model_number": "SPDK bdev Controller", 00:21:30.499 "max_namespaces": 32, 00:21:30.499 "min_cntlid": 1, 00:21:30.499 "max_cntlid": 65519, 00:21:30.499 "namespaces": [ 00:21:30.499 { 00:21:30.499 "nsid": 1, 00:21:30.499 "bdev_name": "Malloc2", 00:21:30.499 "name": "Malloc2", 00:21:30.499 "nguid": "761428205F194B74A57CF86C958B1CD8", 00:21:30.499 "uuid": "76142820-5f19-4b74-a57c-f86c958b1cd8" 00:21:30.499 } 00:21:30.499 ] 00:21:30.499 } 00:21:30.499 ] 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # 
aerpid=3135116 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:21:30.499 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:21:30.760 Malloc4 00:21:30.760 [2024-11-05 16:45:37.659637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:30.760 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:21:31.022 [2024-11-05 16:45:37.836826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:31.022 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:21:31.022 Asynchronous Event Request test 00:21:31.022 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:31.022 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:31.022 Registering asynchronous event callbacks... 00:21:31.022 Starting namespace attribute notice tests for all controllers... 00:21:31.022 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:31.022 aer_cb - Changed Namespace 00:21:31.022 Cleaning up... 00:21:31.022 [ 00:21:31.022 { 00:21:31.022 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:31.022 "subtype": "Discovery", 00:21:31.022 "listen_addresses": [], 00:21:31.022 "allow_any_host": true, 00:21:31.022 "hosts": [] 00:21:31.022 }, 00:21:31.022 { 00:21:31.022 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:31.022 "subtype": "NVMe", 00:21:31.022 "listen_addresses": [ 00:21:31.022 { 00:21:31.022 "trtype": "VFIOUSER", 00:21:31.022 "adrfam": "IPv4", 00:21:31.022 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:31.022 "trsvcid": "0" 00:21:31.022 } 00:21:31.022 ], 00:21:31.022 "allow_any_host": true, 00:21:31.022 "hosts": [], 00:21:31.022 "serial_number": "SPDK1", 00:21:31.022 "model_number": "SPDK bdev Controller", 00:21:31.022 "max_namespaces": 32, 00:21:31.022 "min_cntlid": 1, 00:21:31.022 "max_cntlid": 65519, 00:21:31.022 "namespaces": [ 00:21:31.022 { 00:21:31.022 "nsid": 1, 00:21:31.022 "bdev_name": "Malloc1", 00:21:31.022 "name": "Malloc1", 00:21:31.022 "nguid": "30E4AD52073F498FB3A7336DABC4C8BD", 00:21:31.022 "uuid": "30e4ad52-073f-498f-b3a7-336dabc4c8bd" 00:21:31.022 }, 00:21:31.022 { 00:21:31.022 "nsid": 2, 00:21:31.022 "bdev_name": "Malloc3", 00:21:31.022 "name": "Malloc3", 00:21:31.022 "nguid": "E9E53E07C20A4F3C812EBBE76851CACC", 00:21:31.022 "uuid": "e9e53e07-c20a-4f3c-812e-bbe76851cacc" 00:21:31.022 } 00:21:31.022 ] 00:21:31.022 }, 00:21:31.022 { 00:21:31.022 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:31.022 "subtype": "NVMe", 
00:21:31.022 "listen_addresses": [ 00:21:31.022 { 00:21:31.022 "trtype": "VFIOUSER", 00:21:31.022 "adrfam": "IPv4", 00:21:31.022 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:31.022 "trsvcid": "0" 00:21:31.022 } 00:21:31.022 ], 00:21:31.022 "allow_any_host": true, 00:21:31.022 "hosts": [], 00:21:31.022 "serial_number": "SPDK2", 00:21:31.022 "model_number": "SPDK bdev Controller", 00:21:31.022 "max_namespaces": 32, 00:21:31.022 "min_cntlid": 1, 00:21:31.022 "max_cntlid": 65519, 00:21:31.022 "namespaces": [ 00:21:31.022 { 00:21:31.022 "nsid": 1, 00:21:31.022 "bdev_name": "Malloc2", 00:21:31.022 "name": "Malloc2", 00:21:31.022 "nguid": "761428205F194B74A57CF86C958B1CD8", 00:21:31.022 "uuid": "76142820-5f19-4b74-a57c-f86c958b1cd8" 00:21:31.022 }, 00:21:31.022 { 00:21:31.022 "nsid": 2, 00:21:31.022 "bdev_name": "Malloc4", 00:21:31.022 "name": "Malloc4", 00:21:31.022 "nguid": "FEC124928B8F43C18C99543083967C07", 00:21:31.022 "uuid": "fec12492-8b8f-43c1-8c99-543083967c07" 00:21:31.022 } 00:21:31.022 ] 00:21:31.022 } 00:21:31.022 ] 00:21:31.022 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3135116 00:21:31.022 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:21:31.022 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3125326 00:21:31.022 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3125326 ']' 00:21:31.022 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3125326 00:21:31.022 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:21:31.022 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.022 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 3125326 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3125326' 00:21:31.283 killing process with pid 3125326 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3125326 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3125326 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3135147 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3135147' 00:21:31.283 Process pid: 3135147 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 
00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3135147 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3135147 ']' 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:31.283 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:31.283 [2024-11-05 16:45:38.323859] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:21:31.283 [2024-11-05 16:45:38.324778] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:21:31.283 [2024-11-05 16:45:38.324822] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.544 [2024-11-05 16:45:38.397879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.544 [2024-11-05 16:45:38.433836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.544 [2024-11-05 16:45:38.433874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:31.544 [2024-11-05 16:45:38.433882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.544 [2024-11-05 16:45:38.433888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.544 [2024-11-05 16:45:38.433894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.544 [2024-11-05 16:45:38.435394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.544 [2024-11-05 16:45:38.435510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.544 [2024-11-05 16:45:38.435667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.544 [2024-11-05 16:45:38.435668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.544 [2024-11-05 16:45:38.490563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:21:31.544 [2024-11-05 16:45:38.490617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:21:31.544 [2024-11-05 16:45:38.491592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:21:31.544 [2024-11-05 16:45:38.492249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:21:31.544 [2024-11-05 16:45:38.492368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:21:32.116 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:32.116 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:21:32.116 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:21:33.505 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:21:33.505 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:21:33.505 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:21:33.505 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:33.505 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:21:33.505 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:33.505 Malloc1 00:21:33.505 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:21:33.765 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:21:34.026 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:21:34.287 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:34.287 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:21:34.287 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:34.287 Malloc2 00:21:34.287 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:21:34.548 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:21:34.808 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:21:34.808 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:21:34.808 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3135147 00:21:34.808 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3135147 ']' 00:21:34.808 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3135147 00:21:34.808 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:21:34.808 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:34.808 16:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3135147 00:21:35.069 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:35.069 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:35.069 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3135147' 00:21:35.069 killing process with pid 3135147 00:21:35.069 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3135147 00:21:35.069 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3135147 00:21:35.069 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:21:35.069 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:21:35.069 00:21:35.069 real 0m51.431s 00:21:35.069 user 3m16.942s 00:21:35.069 sys 0m2.808s 00:21:35.069 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:35.069 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:35.069 ************************************ 00:21:35.069 END TEST nvmf_vfio_user 00:21:35.069 ************************************ 00:21:35.069 16:45:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:21:35.069 16:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:35.069 16:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:35.069 16:45:42 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:35.333 ************************************ 00:21:35.333 START TEST nvmf_vfio_user_nvme_compliance 00:21:35.333 ************************************ 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:21:35.333 * Looking for test storage... 00:21:35.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.333 16:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.333 16:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:35.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.333 --rc genhtml_branch_coverage=1 00:21:35.333 --rc genhtml_function_coverage=1 00:21:35.333 --rc genhtml_legend=1 00:21:35.333 --rc geninfo_all_blocks=1 00:21:35.333 --rc geninfo_unexecuted_blocks=1 00:21:35.333 00:21:35.333 ' 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:35.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.333 --rc genhtml_branch_coverage=1 00:21:35.333 --rc genhtml_function_coverage=1 00:21:35.333 --rc genhtml_legend=1 00:21:35.333 --rc geninfo_all_blocks=1 00:21:35.333 --rc geninfo_unexecuted_blocks=1 00:21:35.333 00:21:35.333 ' 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:35.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.333 --rc genhtml_branch_coverage=1 00:21:35.333 --rc genhtml_function_coverage=1 00:21:35.333 --rc 
genhtml_legend=1 00:21:35.333 --rc geninfo_all_blocks=1 00:21:35.333 --rc geninfo_unexecuted_blocks=1 00:21:35.333 00:21:35.333 ' 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:35.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.333 --rc genhtml_branch_coverage=1 00:21:35.333 --rc genhtml_function_coverage=1 00:21:35.333 --rc genhtml_legend=1 00:21:35.333 --rc geninfo_all_blocks=1 00:21:35.333 --rc geninfo_unexecuted_blocks=1 00:21:35.333 00:21:35.333 ' 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.333 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.334 16:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # : 0 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:35.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3136072 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3136072' 00:21:35.334 Process pid: 3136072 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:35.334 16:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3136072 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 3136072 ']' 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:35.334 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 [2024-11-05 16:45:42.438196] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:21:35.595 [2024-11-05 16:45:42.438279] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.595 [2024-11-05 16:45:42.514593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:35.595 [2024-11-05 16:45:42.556073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:35.595 [2024-11-05 16:45:42.556111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.595 [2024-11-05 16:45:42.556119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.595 [2024-11-05 16:45:42.556126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.595 [2024-11-05 16:45:42.556132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.595 [2024-11-05 16:45:42.557707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.595 [2024-11-05 16:45:42.557853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.595 [2024-11-05 16:45:42.558026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.541 16:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:36.541 16:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:21:36.541 16:45:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:37.484 16:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:37.484 malloc0 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.484 16:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.484 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:21:37.484 00:21:37.484 00:21:37.484 CUnit - A unit testing framework for C - Version 2.1-3 00:21:37.484 http://cunit.sourceforge.net/ 00:21:37.484 00:21:37.484 00:21:37.484 Suite: nvme_compliance 00:21:37.484 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-05 16:45:44.523202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:37.484 [2024-11-05 16:45:44.524547] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:21:37.484 [2024-11-05 16:45:44.524558] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:21:37.484 [2024-11-05 16:45:44.524562] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:21:37.484 [2024-11-05 16:45:44.526224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:37.745 passed 00:21:37.745 Test: admin_identify_ctrlr_verify_fused ...[2024-11-05 16:45:44.618775] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:37.745 [2024-11-05 16:45:44.621793] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:37.745 passed 00:21:37.745 Test: admin_identify_ns ...[2024-11-05 16:45:44.720003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:37.745 [2024-11-05 16:45:44.779758] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:21:37.745 [2024-11-05 16:45:44.787758] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:21:37.745 [2024-11-05 16:45:44.808876] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:38.007 passed 00:21:38.007 Test: admin_get_features_mandatory_features ...[2024-11-05 16:45:44.900489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:38.007 [2024-11-05 16:45:44.903508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:38.007 passed 00:21:38.007 Test: admin_get_features_optional_features ...[2024-11-05 16:45:44.997043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:38.007 [2024-11-05 16:45:45.000060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:38.007 passed 00:21:38.268 Test: admin_set_features_number_of_queues ...[2024-11-05 16:45:45.094236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:38.268 [2024-11-05 16:45:45.198842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:38.268 passed 00:21:38.268 Test: admin_get_log_page_mandatory_logs ...[2024-11-05 16:45:45.290851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:38.268 [2024-11-05 16:45:45.293869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:38.529 passed 00:21:38.529 Test: admin_get_log_page_with_lpo ...[2024-11-05 16:45:45.389004] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:38.529 [2024-11-05 16:45:45.456766] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:21:38.529 [2024-11-05 16:45:45.469807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:38.529 passed 00:21:38.529 Test: fabric_property_get ...[2024-11-05 16:45:45.561401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:38.529 [2024-11-05 16:45:45.562645] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:21:38.529 [2024-11-05 16:45:45.564421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:38.790 passed 00:21:38.790 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-05 16:45:45.659016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:38.790 [2024-11-05 16:45:45.660273] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:21:38.790 [2024-11-05 16:45:45.662036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:38.790 passed 00:21:38.791 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-05 16:45:45.752194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:38.791 [2024-11-05 16:45:45.835757] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:38.791 [2024-11-05 16:45:45.851752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:39.052 [2024-11-05 16:45:45.856842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:39.052 passed 00:21:39.052 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-05 16:45:45.948446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:39.052 [2024-11-05 
16:45:45.949701] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:21:39.052 [2024-11-05 16:45:45.951461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:39.052 passed 00:21:39.052 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-05 16:45:46.042585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:39.313 [2024-11-05 16:45:46.117753] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:39.313 [2024-11-05 16:45:46.141758] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:39.313 [2024-11-05 16:45:46.146843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:39.313 passed 00:21:39.313 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-05 16:45:46.240839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:39.313 [2024-11-05 16:45:46.242097] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:21:39.313 [2024-11-05 16:45:46.242117] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:21:39.313 [2024-11-05 16:45:46.243854] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:39.313 passed 00:21:39.313 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-05 16:45:46.336994] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:39.574 [2024-11-05 16:45:46.428752] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:21:39.574 [2024-11-05 16:45:46.436750] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:21:39.574 [2024-11-05 16:45:46.444750] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:21:39.574 [2024-11-05 
16:45:46.452750] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:21:39.574 [2024-11-05 16:45:46.481835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:39.574 passed 00:21:39.574 Test: admin_create_io_sq_verify_pc ...[2024-11-05 16:45:46.575804] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:39.574 [2024-11-05 16:45:46.591761] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:21:39.574 [2024-11-05 16:45:46.608963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:39.835 passed 00:21:39.835 Test: admin_create_io_qp_max_qps ...[2024-11-05 16:45:46.703479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:40.860 [2024-11-05 16:45:47.803758] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:21:41.167 [2024-11-05 16:45:48.178323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:41.167 passed 00:21:41.436 Test: admin_create_io_sq_shared_cq ...[2024-11-05 16:45:48.269459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:41.436 [2024-11-05 16:45:48.400757] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:41.436 [2024-11-05 16:45:48.437802] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:41.436 passed 00:21:41.436 00:21:41.436 Run Summary: Type Total Ran Passed Failed Inactive 00:21:41.436 suites 1 1 n/a 0 0 00:21:41.436 tests 18 18 18 0 0 00:21:41.436 asserts 360 360 360 0 n/a 00:21:41.436 00:21:41.436 Elapsed time = 1.644 seconds 00:21:41.436 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3136072 00:21:41.436 16:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 3136072 ']' 00:21:41.436 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 3136072 00:21:41.436 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:21:41.436 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3136072 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3136072' 00:21:41.697 killing process with pid 3136072 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 3136072 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 3136072 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:41.697 00:21:41.697 real 0m6.551s 00:21:41.697 user 0m18.593s 00:21:41.697 sys 0m0.540s 00:21:41.697 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:41.697 16:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:41.697 ************************************ 00:21:41.697 END TEST nvmf_vfio_user_nvme_compliance 00:21:41.697 ************************************ 00:21:41.698 16:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:41.698 16:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:41.698 16:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:41.698 16:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:41.960 ************************************ 00:21:41.960 START TEST nvmf_vfio_user_fuzz 00:21:41.960 ************************************ 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:41.960 * Looking for test storage... 
00:21:41.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:41.960 16:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.960 16:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:41.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.960 --rc genhtml_branch_coverage=1 00:21:41.960 --rc genhtml_function_coverage=1 00:21:41.960 --rc genhtml_legend=1 00:21:41.960 --rc geninfo_all_blocks=1 00:21:41.960 --rc geninfo_unexecuted_blocks=1 00:21:41.960 00:21:41.960 ' 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:41.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.960 --rc genhtml_branch_coverage=1 00:21:41.960 --rc genhtml_function_coverage=1 00:21:41.960 --rc genhtml_legend=1 00:21:41.960 --rc geninfo_all_blocks=1 00:21:41.960 --rc geninfo_unexecuted_blocks=1 00:21:41.960 00:21:41.960 ' 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:41.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.960 --rc genhtml_branch_coverage=1 00:21:41.960 --rc genhtml_function_coverage=1 00:21:41.960 --rc genhtml_legend=1 00:21:41.960 --rc geninfo_all_blocks=1 00:21:41.960 --rc geninfo_unexecuted_blocks=1 00:21:41.960 00:21:41.960 ' 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:41.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.960 --rc genhtml_branch_coverage=1 00:21:41.960 --rc genhtml_function_coverage=1 00:21:41.960 --rc genhtml_legend=1 00:21:41.960 --rc geninfo_all_blocks=1 00:21:41.960 --rc geninfo_unexecuted_blocks=1 00:21:41.960 00:21:41.960 ' 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 
00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.960 16:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:41.960 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:41.960 16:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # : 0 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:41.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:41.961 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3137336 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3137336' 00:21:41.961 Process pid: 3137336 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3137336 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3137336 ']' 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:41.961 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:42.905 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:42.905 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:21:42.905 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.851 malloc0 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:21:43.851 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:22:16.633 Fuzzing completed. 
Shutting down the fuzz application 00:22:16.633 00:22:16.633 Dumping successful admin opcodes: 00:22:16.633 8, 9, 10, 24, 00:22:16.633 Dumping successful io opcodes: 00:22:16.633 0, 00:22:16.633 NS: 0x20000081ef00 I/O qp, Total commands completed: 1118349, total successful commands: 4401, random_seed: 3679572864 00:22:16.633 NS: 0x20000081ef00 admin qp, Total commands completed: 140686, total successful commands: 1142, random_seed: 4187522496 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3137336 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3137336 ']' 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 3137336 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3137336 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:16.633 
16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3137336' 00:22:16.633 killing process with pid 3137336 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 3137336 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 3137336 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:22:16.633 00:22:16.633 real 0m33.740s 00:22:16.633 user 0m37.846s 00:22:16.633 sys 0m25.840s 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:16.633 ************************************ 00:22:16.633 END TEST nvmf_vfio_user_fuzz 00:22:16.633 ************************************ 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:16.633 16:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:16.633 ************************************ 00:22:16.633 START TEST nvmf_auth_target 00:22:16.633 ************************************ 00:22:16.633 16:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:16.634 * Looking for test storage... 00:22:16.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:16.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.634 --rc genhtml_branch_coverage=1 00:22:16.634 --rc genhtml_function_coverage=1 00:22:16.634 --rc genhtml_legend=1 00:22:16.634 --rc geninfo_all_blocks=1 00:22:16.634 --rc geninfo_unexecuted_blocks=1 00:22:16.634 00:22:16.634 ' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:16.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.634 --rc genhtml_branch_coverage=1 00:22:16.634 --rc genhtml_function_coverage=1 00:22:16.634 --rc genhtml_legend=1 00:22:16.634 --rc geninfo_all_blocks=1 00:22:16.634 --rc geninfo_unexecuted_blocks=1 00:22:16.634 00:22:16.634 ' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:16.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.634 --rc genhtml_branch_coverage=1 00:22:16.634 --rc genhtml_function_coverage=1 00:22:16.634 --rc genhtml_legend=1 00:22:16.634 --rc geninfo_all_blocks=1 00:22:16.634 --rc geninfo_unexecuted_blocks=1 00:22:16.634 00:22:16.634 ' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:16.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.634 --rc genhtml_branch_coverage=1 00:22:16.634 --rc genhtml_function_coverage=1 00:22:16.634 --rc genhtml_legend=1 00:22:16.634 --rc geninfo_all_blocks=1 00:22:16.634 --rc geninfo_unexecuted_blocks=1 00:22:16.634 00:22:16.634 ' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.634 16:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.634 
16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@50 -- # : 0 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:16.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:16.634 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 
00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:22:16.635 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:23.221 
16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:22:23.221 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.222 16:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:23.222 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:23.222 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:23.222 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:23.222 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:23.222 16:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@247 -- # create_target_ns 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:23.222 16:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:22:23.222 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:23.223 10.0.0.1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:23.223 10.0.0.2 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 
]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:23.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.698 ms 00:22:23.223 00:22:23.223 --- 10.0.0.1 ping statistics --- 00:22:23.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.223 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:23.223 
16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:22:23.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:22:23.223 00:22:23.223 --- 10.0.0.2 ping statistics --- 00:22:23.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.223 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:22:23.223 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:22:23.224 16:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # return 1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev= 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # return 0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:23.224 16:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # return 1 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev= 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # return 0 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:22:23.224 ' 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:23.224 16:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=3147644 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 3147644 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3147644 ']' 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
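The address-resolution trace above (`get_ip_address` / `get_net_dev`) reads each interface's IP out of `/sys/class/net/<dev>/ifalias`, wrapping the `cat` in `ip netns exec` when the device lives in the target's network namespace. A minimal sketch of how that command line is assembled (the helper name `ifalias_cmd` is mine, not an SPDK function):

```python
import shlex

def ifalias_cmd(dev, netns=None):
    # setup.sh stores each interface's IP in its ifalias; reading it
    # back is just a cat, prefixed with `ip netns exec` when the
    # device sits inside a namespace (e.g. the nvmf_ns_spdk target ns).
    cat = "cat /sys/class/net/%s/ifalias" % dev
    if netns:
        return "ip netns exec %s %s" % (shlex.quote(netns), cat)
    return cat

print(ifalias_cmd("cvl_0_0"))                  # initiator side, host ns
print(ifalias_cmd("cvl_0_1", "nvmf_ns_spdk"))  # target side, inside ns
```

This mirrors why the trace shows a bare `cat` for `cvl_0_0` (initiator, 10.0.0.1) but `ip netns exec nvmf_ns_spdk cat ...` for `cvl_0_1` (target, 10.0.0.2).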
00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:23.224 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3147730 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@528 -- # digest=null 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=bda89e425c80171dd6d390689fab8ecdaf68f3be5ff1568a 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.ew5 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key bda89e425c80171dd6d390689fab8ecdaf68f3be5ff1568a 0 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 bda89e425c80171dd6d390689fab8ecdaf68f3be5ff1568a 0 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=bda89e425c80171dd6d390689fab8ecdaf68f3be5ff1568a 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.ew5 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.ew5 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.ew5 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=3f17826703897a70f758675d4bf3b75916ac755e5dbae049b6f6dcb8340be11f 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.jGp 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 3f17826703897a70f758675d4bf3b75916ac755e5dbae049b6f6dcb8340be11f 3 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 3f17826703897a70f758675d4bf3b75916ac755e5dbae049b6f6dcb8340be11f 3 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=3f17826703897a70f758675d4bf3b75916ac755e5dbae049b6f6dcb8340be11f 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@506 -- # digest=3 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.jGp 00:22:23.795 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.jGp 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.jGp 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=6afa2e6e63591b3dfcd14ebf0a9ef86a 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.nhW 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 6afa2e6e63591b3dfcd14ebf0a9ef86a 1 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 
6afa2e6e63591b3dfcd14ebf0a9ef86a 1 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=6afa2e6e63591b3dfcd14ebf0a9ef86a 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.nhW 00:22:23.796 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.nhW 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.nhW 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=7dad4e9fec1e9b3a8042d5a954eeb807c033ae5116635a3f 00:22:24.057 16:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.w1x 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 7dad4e9fec1e9b3a8042d5a954eeb807c033ae5116635a3f 2 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 7dad4e9fec1e9b3a8042d5a954eeb807c033ae5116635a3f 2 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=7dad4e9fec1e9b3a8042d5a954eeb807c033ae5116635a3f 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.w1x 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.w1x 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.w1x 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A 
digests 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=50f8f90de025ccc1ceab67e00273861c8629640230db5bfb 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.oPh 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 50f8f90de025ccc1ceab67e00273861c8629640230db5bfb 2 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 50f8f90de025ccc1ceab67e00273861c8629640230db5bfb 2 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=50f8f90de025ccc1ceab67e00273861c8629640230db5bfb 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.oPh 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.oPh 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.oPh 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:22:24.057 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:24.058 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=0496a3da7b58c924688d65b159c01fef 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.2pW 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 0496a3da7b58c924688d65b159c01fef 1 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 0496a3da7b58c924688d65b159c01fef 1 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=0496a3da7b58c924688d65b159c01fef 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 
00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.2pW 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.2pW 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.2pW 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=a69f953c5c5142eacda6defff38afbcc04e856927e99c964ace4fac737a1abae 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.wei 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key a69f953c5c5142eacda6defff38afbcc04e856927e99c964ace4fac737a1abae 3 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # 
format_key DHHC-1 a69f953c5c5142eacda6defff38afbcc04e856927e99c964ace4fac737a1abae 3 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=a69f953c5c5142eacda6defff38afbcc04e856927e99c964ace4fac737a1abae 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.wei 00:22:24.058 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.wei 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.wei 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3147644 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3147644 ']' 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
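The repeated `gen_dhchap_key` blocks above read N random bytes with `xxd -p -c0 -l N /dev/urandom` and then hand the hex string to `format_key DHHC-1 <hex> <digest>`, which runs an inline `python -` snippet to produce the on-disk secret. A sketch of that formatting step, assuming the usual DH-HMAC-CHAP secret representation (base64 of secret plus a trailing little-endian CRC-32, as used by nvme-cli's key tooling; the function name here is mine):

```python
import base64
import os
import zlib

def format_dhchap_key(secret, hash_id):
    # Payload is the raw secret followed by its CRC-32 (little-endian),
    # base64-encoded. hash_id selects the digest and matches the 0..3
    # arguments seen in the trace (0=null, 1=sha256, 2=sha384, 3=sha512).
    crc = zlib.crc32(secret).to_bytes(4, "little")
    b64 = base64.b64encode(secret + crc).decode()
    return "DHHC-1:%02x:%s:" % (hash_id, b64)

# 24 random bytes corresponds to `xxd -p -c0 -l 24` (a 48-hex-char key)
print(format_dhchap_key(os.urandom(24), 0))
```

The result is what ends up in the `chmod 0600` temp files such as `/tmp/spdk.key-null.ew5` traced above.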
00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3147730 /var/tmp/host.sock 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3147730 ']' 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:24.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
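Before the generated files are registered on both RPC sockets with `keyring_file_add_key` (as the trace does next, once against `/var/tmp/spdk.sock` and once against `/var/tmp/host.sock`), a consumer can sanity-check a secret by inverting the `format_dhchap_key` step traced above. A sketch under the same payload assumption (secret plus trailing little-endian CRC-32); `parse_dhchap_key` is not an SPDK API:

```python
import base64
import zlib

def parse_dhchap_key(text):
    # Split "DHHC-1:<2-hex digest id>:<base64(secret||crc32le)>:" and
    # verify the trailing CRC-32 before returning the raw secret.
    prefix, hash_hex, b64, _trailer = text.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    payload = base64.b64decode(b64)
    secret, crc = payload[:-4], payload[-4:]
    if zlib.crc32(secret).to_bytes(4, "little") != crc:
        raise ValueError("corrupt DHCHAP secret: CRC mismatch")
    return int(hash_hex, 16), secret
```

A mismatch here would surface later as an opaque authentication failure on the host side, so checking the CRC up front is a cheap diagnostic.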
00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:24.319 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ew5 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ew5 00:22:24.580 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ew5 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.jGp ]] 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jGp 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jGp 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jGp 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nhW 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nhW 00:22:24.840 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nhW 00:22:25.101 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.w1x ]] 00:22:25.101 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w1x 00:22:25.101 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.101 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.101 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.101 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w1x 00:22:25.101 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w1x 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oPh 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oPh 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oPh 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.2pW ]] 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2pW 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2pW 00:22:25.361 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2pW 00:22:25.622 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:25.622 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wei 00:22:25.622 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.622 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.622 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.622 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.wei 00:22:25.622 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.wei 00:22:25.883 16:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.883 16:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.883 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.143 00:22:26.143 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.143 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.143 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.406 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.406 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.407 { 00:22:26.407 "cntlid": 1, 00:22:26.407 "qid": 0, 00:22:26.407 "state": "enabled", 00:22:26.407 "thread": "nvmf_tgt_poll_group_000", 00:22:26.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:26.407 "listen_address": { 00:22:26.407 "trtype": "TCP", 00:22:26.407 "adrfam": "IPv4", 00:22:26.407 "traddr": "10.0.0.2", 00:22:26.407 "trsvcid": "4420" 00:22:26.407 }, 00:22:26.407 "peer_address": { 00:22:26.407 "trtype": "TCP", 00:22:26.407 "adrfam": "IPv4", 00:22:26.407 "traddr": "10.0.0.1", 00:22:26.407 "trsvcid": "34060" 00:22:26.407 }, 00:22:26.407 "auth": { 00:22:26.407 "state": "completed", 00:22:26.407 "digest": "sha256", 00:22:26.407 "dhgroup": "null" 00:22:26.407 } 00:22:26.407 } 00:22:26.407 ]' 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.407 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.667 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:26.667 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.608 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.869 00:22:27.869 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.869 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.869 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.130 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.130 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.130 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.130 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.130 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.130 { 00:22:28.130 "cntlid": 3, 00:22:28.130 "qid": 0, 00:22:28.130 "state": "enabled", 00:22:28.130 "thread": "nvmf_tgt_poll_group_000", 00:22:28.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:28.130 "listen_address": { 00:22:28.130 "trtype": "TCP", 00:22:28.130 "adrfam": "IPv4", 00:22:28.130 
"traddr": "10.0.0.2", 00:22:28.130 "trsvcid": "4420" 00:22:28.130 }, 00:22:28.130 "peer_address": { 00:22:28.130 "trtype": "TCP", 00:22:28.130 "adrfam": "IPv4", 00:22:28.130 "traddr": "10.0.0.1", 00:22:28.130 "trsvcid": "34088" 00:22:28.130 }, 00:22:28.130 "auth": { 00:22:28.130 "state": "completed", 00:22:28.130 "digest": "sha256", 00:22:28.130 "dhgroup": "null" 00:22:28.130 } 00:22:28.130 } 00:22:28.130 ]' 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.130 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.390 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:28.390 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:29.332 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.332 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:29.332 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.332 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.332 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.332 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.332 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:29.332 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.333 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.593 00:22:29.593 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.593 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.593 
16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.853 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.853 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.853 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.854 { 00:22:29.854 "cntlid": 5, 00:22:29.854 "qid": 0, 00:22:29.854 "state": "enabled", 00:22:29.854 "thread": "nvmf_tgt_poll_group_000", 00:22:29.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:29.854 "listen_address": { 00:22:29.854 "trtype": "TCP", 00:22:29.854 "adrfam": "IPv4", 00:22:29.854 "traddr": "10.0.0.2", 00:22:29.854 "trsvcid": "4420" 00:22:29.854 }, 00:22:29.854 "peer_address": { 00:22:29.854 "trtype": "TCP", 00:22:29.854 "adrfam": "IPv4", 00:22:29.854 "traddr": "10.0.0.1", 00:22:29.854 "trsvcid": "34106" 00:22:29.854 }, 00:22:29.854 "auth": { 00:22:29.854 "state": "completed", 00:22:29.854 "digest": "sha256", 00:22:29.854 "dhgroup": "null" 00:22:29.854 } 00:22:29.854 } 00:22:29.854 ]' 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.854 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.114 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:30.114 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:31.055 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.056 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.316 00:22:31.316 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.316 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.316 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.576 
16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.576 { 00:22:31.576 "cntlid": 7, 00:22:31.576 "qid": 0, 00:22:31.576 "state": "enabled", 00:22:31.576 "thread": "nvmf_tgt_poll_group_000", 00:22:31.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:31.576 "listen_address": { 00:22:31.576 "trtype": "TCP", 00:22:31.576 "adrfam": "IPv4", 00:22:31.576 "traddr": "10.0.0.2", 00:22:31.576 "trsvcid": "4420" 00:22:31.576 }, 00:22:31.576 "peer_address": { 00:22:31.576 "trtype": "TCP", 00:22:31.576 "adrfam": "IPv4", 00:22:31.576 "traddr": "10.0.0.1", 00:22:31.576 "trsvcid": "51706" 00:22:31.576 }, 00:22:31.576 "auth": { 00:22:31.576 "state": "completed", 00:22:31.576 "digest": "sha256", 00:22:31.576 "dhgroup": "null" 00:22:31.576 } 00:22:31.576 } 00:22:31.576 ]' 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.576 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.835 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:31.835 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.775 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.035 00:22:33.035 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.035 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.035 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.296 { 00:22:33.296 "cntlid": 9, 00:22:33.296 "qid": 0, 00:22:33.296 "state": "enabled", 00:22:33.296 "thread": "nvmf_tgt_poll_group_000", 00:22:33.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:33.296 "listen_address": { 00:22:33.296 "trtype": "TCP", 00:22:33.296 "adrfam": "IPv4", 00:22:33.296 "traddr": "10.0.0.2", 00:22:33.296 "trsvcid": "4420" 00:22:33.296 }, 00:22:33.296 "peer_address": { 00:22:33.296 "trtype": "TCP", 00:22:33.296 "adrfam": "IPv4", 00:22:33.296 "traddr": "10.0.0.1", 00:22:33.296 "trsvcid": "51726" 00:22:33.296 
}, 00:22:33.296 "auth": { 00:22:33.296 "state": "completed", 00:22:33.296 "digest": "sha256", 00:22:33.296 "dhgroup": "ffdhe2048" 00:22:33.296 } 00:22:33.296 } 00:22:33.296 ]' 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.296 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.557 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:33.557 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret 
DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:34.127 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.388 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.676 00:22:34.676 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.676 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.676 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.936 { 00:22:34.936 "cntlid": 11, 00:22:34.936 "qid": 0, 00:22:34.936 "state": "enabled", 00:22:34.936 "thread": "nvmf_tgt_poll_group_000", 00:22:34.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:34.936 "listen_address": { 00:22:34.936 "trtype": "TCP", 00:22:34.936 "adrfam": "IPv4", 00:22:34.936 "traddr": "10.0.0.2", 00:22:34.936 "trsvcid": "4420" 00:22:34.936 }, 00:22:34.936 "peer_address": { 00:22:34.936 "trtype": "TCP", 00:22:34.936 "adrfam": "IPv4", 00:22:34.936 "traddr": "10.0.0.1", 00:22:34.936 "trsvcid": "51758" 00:22:34.936 }, 00:22:34.936 "auth": { 00:22:34.936 "state": "completed", 00:22:34.936 "digest": "sha256", 00:22:34.936 "dhgroup": "ffdhe2048" 00:22:34.936 } 00:22:34.936 } 00:22:34.936 ]' 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.936 16:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.936 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.197 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:35.197 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:36.139 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.139 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.139 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:36.139 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.139 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.139 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.139 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:36.139 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:36.139 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:22:36.139 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.139 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.140 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.401 00:22:36.401 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.401 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.401 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.662 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.663 16:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.663 { 00:22:36.663 "cntlid": 13, 00:22:36.663 "qid": 0, 00:22:36.663 "state": "enabled", 00:22:36.663 "thread": "nvmf_tgt_poll_group_000", 00:22:36.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:36.663 "listen_address": { 00:22:36.663 "trtype": "TCP", 00:22:36.663 "adrfam": "IPv4", 00:22:36.663 "traddr": "10.0.0.2", 00:22:36.663 "trsvcid": "4420" 00:22:36.663 }, 00:22:36.663 "peer_address": { 00:22:36.663 "trtype": "TCP", 00:22:36.663 "adrfam": "IPv4", 00:22:36.663 "traddr": "10.0.0.1", 00:22:36.663 "trsvcid": "51792" 00:22:36.663 }, 00:22:36.663 "auth": { 00:22:36.663 "state": "completed", 00:22:36.663 "digest": "sha256", 00:22:36.663 "dhgroup": "ffdhe2048" 00:22:36.663 } 00:22:36.663 } 00:22:36.663 ]' 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.663 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.923 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:36.924 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.868 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.129 00:22:38.129 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.129 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.129 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.390 { 00:22:38.390 "cntlid": 15, 00:22:38.390 "qid": 0, 00:22:38.390 "state": "enabled", 00:22:38.390 "thread": "nvmf_tgt_poll_group_000", 00:22:38.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:38.390 "listen_address": { 00:22:38.390 "trtype": "TCP", 00:22:38.390 "adrfam": "IPv4", 00:22:38.390 "traddr": "10.0.0.2", 00:22:38.390 "trsvcid": "4420" 00:22:38.390 }, 00:22:38.390 "peer_address": { 00:22:38.390 "trtype": "TCP", 00:22:38.390 "adrfam": "IPv4", 00:22:38.390 "traddr": "10.0.0.1", 
00:22:38.390 "trsvcid": "51820" 00:22:38.390 }, 00:22:38.390 "auth": { 00:22:38.390 "state": "completed", 00:22:38.390 "digest": "sha256", 00:22:38.390 "dhgroup": "ffdhe2048" 00:22:38.390 } 00:22:38.390 } 00:22:38.390 ]' 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.390 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.650 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:38.650 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:39.597 16:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.597 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.858 00:22:39.858 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.858 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.858 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.118 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.118 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.118 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.118 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.118 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.118 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.118 { 00:22:40.118 "cntlid": 17, 00:22:40.118 "qid": 0, 00:22:40.118 "state": "enabled", 00:22:40.118 "thread": "nvmf_tgt_poll_group_000", 00:22:40.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:40.118 "listen_address": { 00:22:40.118 "trtype": "TCP", 00:22:40.118 "adrfam": "IPv4", 00:22:40.118 "traddr": "10.0.0.2", 00:22:40.118 "trsvcid": "4420" 00:22:40.118 }, 00:22:40.118 "peer_address": { 00:22:40.118 "trtype": "TCP", 00:22:40.118 "adrfam": "IPv4", 00:22:40.118 "traddr": "10.0.0.1", 00:22:40.118 "trsvcid": "51856" 00:22:40.118 }, 00:22:40.118 "auth": { 00:22:40.118 "state": "completed", 00:22:40.118 "digest": "sha256", 00:22:40.118 "dhgroup": "ffdhe3072" 00:22:40.118 } 00:22:40.118 } 00:22:40.118 ]' 00:22:40.118 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.118 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:40.118 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.118 16:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:40.118 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.118 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.118 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.118 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.379 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:40.379 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.321 16:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.321 16:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.321 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.322 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.582 00:22:41.582 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.582 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.582 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.843 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.843 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.843 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.843 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.843 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.843 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.843 { 00:22:41.843 "cntlid": 19, 00:22:41.843 "qid": 0, 00:22:41.843 "state": "enabled", 00:22:41.843 "thread": "nvmf_tgt_poll_group_000", 00:22:41.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:41.843 "listen_address": { 00:22:41.843 "trtype": "TCP", 00:22:41.843 "adrfam": "IPv4", 00:22:41.843 "traddr": "10.0.0.2", 00:22:41.843 "trsvcid": "4420" 00:22:41.843 }, 00:22:41.843 "peer_address": { 00:22:41.843 "trtype": "TCP", 00:22:41.843 "adrfam": "IPv4", 00:22:41.843 "traddr": "10.0.0.1", 00:22:41.843 "trsvcid": "53542" 00:22:41.843 }, 00:22:41.843 "auth": { 00:22:41.843 "state": "completed", 00:22:41.843 "digest": "sha256", 00:22:41.843 "dhgroup": "ffdhe3072" 00:22:41.843 } 00:22:41.843 } 00:22:41.843 ]' 00:22:41.843 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.843 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:41.844 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.844 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:41.844 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.844 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.844 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.844 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.104 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:42.104 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:43.046 16:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.046 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.047 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.307 00:22:43.307 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.307 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.307 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.568 { 00:22:43.568 "cntlid": 21, 00:22:43.568 "qid": 0, 00:22:43.568 "state": "enabled", 00:22:43.568 "thread": "nvmf_tgt_poll_group_000", 00:22:43.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:43.568 "listen_address": { 00:22:43.568 "trtype": "TCP", 00:22:43.568 "adrfam": "IPv4", 00:22:43.568 "traddr": "10.0.0.2", 00:22:43.568 
"trsvcid": "4420" 00:22:43.568 }, 00:22:43.568 "peer_address": { 00:22:43.568 "trtype": "TCP", 00:22:43.568 "adrfam": "IPv4", 00:22:43.568 "traddr": "10.0.0.1", 00:22:43.568 "trsvcid": "53584" 00:22:43.568 }, 00:22:43.568 "auth": { 00:22:43.568 "state": "completed", 00:22:43.568 "digest": "sha256", 00:22:43.568 "dhgroup": "ffdhe3072" 00:22:43.568 } 00:22:43.568 } 00:22:43.568 ]' 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.568 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.829 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:43.829 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.772 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.033 00:22:45.033 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.033 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.033 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.033 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.033 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.033 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.033 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.293 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.293 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.293 { 00:22:45.293 "cntlid": 23, 00:22:45.293 "qid": 0, 00:22:45.293 "state": "enabled", 00:22:45.293 "thread": "nvmf_tgt_poll_group_000", 00:22:45.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:45.294 "listen_address": { 00:22:45.294 "trtype": "TCP", 00:22:45.294 "adrfam": "IPv4", 00:22:45.294 "traddr": "10.0.0.2", 00:22:45.294 "trsvcid": "4420" 00:22:45.294 }, 00:22:45.294 "peer_address": { 00:22:45.294 "trtype": "TCP", 00:22:45.294 "adrfam": "IPv4", 00:22:45.294 "traddr": "10.0.0.1", 00:22:45.294 "trsvcid": "53616" 00:22:45.294 }, 00:22:45.294 "auth": { 00:22:45.294 "state": "completed", 00:22:45.294 "digest": "sha256", 00:22:45.294 "dhgroup": "ffdhe3072" 00:22:45.294 } 00:22:45.294 } 00:22:45.294 ]' 00:22:45.294 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.294 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:45.294 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.294 16:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:45.294 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.294 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.294 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.294 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.554 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:45.554 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:46.127 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.388 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.649 00:22:46.649 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.650 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.650 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.910 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.910 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.910 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.910 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.910 16:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.910 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.910 { 00:22:46.910 "cntlid": 25, 00:22:46.910 "qid": 0, 00:22:46.910 "state": "enabled", 00:22:46.910 "thread": "nvmf_tgt_poll_group_000", 00:22:46.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:46.910 "listen_address": { 00:22:46.910 "trtype": "TCP", 00:22:46.910 "adrfam": "IPv4", 00:22:46.910 "traddr": "10.0.0.2", 00:22:46.910 "trsvcid": "4420" 00:22:46.910 }, 00:22:46.910 "peer_address": { 00:22:46.910 "trtype": "TCP", 00:22:46.910 "adrfam": "IPv4", 00:22:46.910 "traddr": "10.0.0.1", 00:22:46.910 "trsvcid": "53660" 00:22:46.910 }, 00:22:46.910 "auth": { 00:22:46.910 "state": "completed", 00:22:46.910 "digest": "sha256", 00:22:46.910 "dhgroup": "ffdhe4096" 00:22:46.910 } 00:22:46.910 } 00:22:46.910 ]' 00:22:46.910 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.910 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:46.911 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.911 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:46.911 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.170 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.170 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.170 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.170 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:47.170 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:48.112 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.112 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.112 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.112 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.112 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.112 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.112 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:48.112 16:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:48.112 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:48.112 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.112 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:48.112 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:48.112 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:48.112 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.113 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.113 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.113 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.113 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.113 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.113 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.113 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.373 00:22:48.373 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.373 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.373 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.634 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.635 { 00:22:48.635 "cntlid": 27, 00:22:48.635 "qid": 0, 00:22:48.635 "state": "enabled", 00:22:48.635 "thread": "nvmf_tgt_poll_group_000", 00:22:48.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:48.635 "listen_address": { 00:22:48.635 "trtype": "TCP", 00:22:48.635 "adrfam": "IPv4", 00:22:48.635 "traddr": "10.0.0.2", 00:22:48.635 
"trsvcid": "4420" 00:22:48.635 }, 00:22:48.635 "peer_address": { 00:22:48.635 "trtype": "TCP", 00:22:48.635 "adrfam": "IPv4", 00:22:48.635 "traddr": "10.0.0.1", 00:22:48.635 "trsvcid": "53674" 00:22:48.635 }, 00:22:48.635 "auth": { 00:22:48.635 "state": "completed", 00:22:48.635 "digest": "sha256", 00:22:48.635 "dhgroup": "ffdhe4096" 00:22:48.635 } 00:22:48.635 } 00:22:48.635 ]' 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:48.635 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.895 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.895 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.895 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.895 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:48.895 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:49.837 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.838 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.838 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.838 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.098 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.098 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.098 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.098 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.098 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.359 { 00:22:50.359 "cntlid": 29, 00:22:50.359 "qid": 0, 00:22:50.359 "state": "enabled", 00:22:50.359 "thread": "nvmf_tgt_poll_group_000", 00:22:50.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:50.359 "listen_address": { 00:22:50.359 "trtype": "TCP", 00:22:50.359 "adrfam": "IPv4", 00:22:50.359 "traddr": "10.0.0.2", 00:22:50.359 "trsvcid": "4420" 00:22:50.359 }, 00:22:50.359 "peer_address": { 00:22:50.359 "trtype": "TCP", 00:22:50.359 "adrfam": "IPv4", 00:22:50.359 "traddr": "10.0.0.1", 00:22:50.359 "trsvcid": "53704" 00:22:50.359 }, 00:22:50.359 "auth": { 00:22:50.359 "state": "completed", 00:22:50.359 "digest": "sha256", 00:22:50.359 "dhgroup": "ffdhe4096" 00:22:50.359 } 00:22:50.359 } 00:22:50.359 ]' 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.359 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:50.359 16:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.691 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:50.691 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.691 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.691 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.691 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.691 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:50.691 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.670 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.931 00:22:51.931 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.931 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.931 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.192 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.192 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.192 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.192 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:52.192 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.192 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.192 { 00:22:52.192 "cntlid": 31, 00:22:52.192 "qid": 0, 00:22:52.192 "state": "enabled", 00:22:52.192 "thread": "nvmf_tgt_poll_group_000", 00:22:52.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:52.192 "listen_address": { 00:22:52.192 "trtype": "TCP", 00:22:52.192 "adrfam": "IPv4", 00:22:52.192 "traddr": "10.0.0.2", 00:22:52.192 "trsvcid": "4420" 00:22:52.192 }, 00:22:52.192 "peer_address": { 00:22:52.192 "trtype": "TCP", 00:22:52.192 "adrfam": "IPv4", 00:22:52.193 "traddr": "10.0.0.1", 00:22:52.193 "trsvcid": "33312" 00:22:52.193 }, 00:22:52.193 "auth": { 00:22:52.193 "state": "completed", 00:22:52.193 "digest": "sha256", 00:22:52.193 "dhgroup": "ffdhe4096" 00:22:52.193 } 00:22:52.193 } 00:22:52.193 ]' 00:22:52.193 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.193 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:52.193 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.193 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:52.193 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.193 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.193 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.193 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.453 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:52.453 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:53.403 16:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.403 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.404 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.664 00:22:53.664 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.664 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.664 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.924 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.924 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.925 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.925 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.925 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.925 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.925 { 00:22:53.925 "cntlid": 33, 00:22:53.925 "qid": 0, 00:22:53.925 "state": "enabled", 00:22:53.925 "thread": "nvmf_tgt_poll_group_000", 00:22:53.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:53.925 "listen_address": { 00:22:53.925 "trtype": "TCP", 00:22:53.925 "adrfam": "IPv4", 00:22:53.925 "traddr": "10.0.0.2", 00:22:53.925 
"trsvcid": "4420" 00:22:53.925 }, 00:22:53.925 "peer_address": { 00:22:53.925 "trtype": "TCP", 00:22:53.925 "adrfam": "IPv4", 00:22:53.925 "traddr": "10.0.0.1", 00:22:53.925 "trsvcid": "33348" 00:22:53.925 }, 00:22:53.925 "auth": { 00:22:53.925 "state": "completed", 00:22:53.925 "digest": "sha256", 00:22:53.925 "dhgroup": "ffdhe6144" 00:22:53.925 } 00:22:53.925 } 00:22:53.925 ]' 00:22:53.925 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.925 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:53.925 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.185 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:54.185 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.185 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.185 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.185 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.185 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:54.185 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:22:55.126 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.126 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.126 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.126 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.126 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.126 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.126 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:55.126 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.126 16:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.126 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.697 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.697 { 00:22:55.697 "cntlid": 35, 00:22:55.697 "qid": 0, 00:22:55.697 "state": "enabled", 00:22:55.697 "thread": "nvmf_tgt_poll_group_000", 00:22:55.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:55.697 "listen_address": { 00:22:55.697 "trtype": "TCP", 00:22:55.697 "adrfam": "IPv4", 00:22:55.697 "traddr": "10.0.0.2", 00:22:55.697 "trsvcid": "4420" 00:22:55.697 }, 00:22:55.697 "peer_address": { 00:22:55.697 "trtype": "TCP", 00:22:55.697 "adrfam": "IPv4", 00:22:55.697 "traddr": "10.0.0.1", 00:22:55.697 "trsvcid": "33364" 00:22:55.697 }, 00:22:55.697 "auth": { 00:22:55.697 "state": "completed", 00:22:55.697 "digest": "sha256", 00:22:55.697 "dhgroup": "ffdhe6144" 00:22:55.697 } 00:22:55.697 } 00:22:55.697 ]' 00:22:55.697 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.958 16:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:55.958 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.958 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:55.958 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.958 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.958 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.958 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.220 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:56.220 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:22:56.793 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.793 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:56.793 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.793 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.793 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.793 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.793 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:56.793 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.053 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.054 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.054 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.314 00:22:57.314 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.314 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.314 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.574 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.574 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.574 16:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.574 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.574 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.574 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.574 { 00:22:57.574 "cntlid": 37, 00:22:57.574 "qid": 0, 00:22:57.574 "state": "enabled", 00:22:57.574 "thread": "nvmf_tgt_poll_group_000", 00:22:57.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:57.574 "listen_address": { 00:22:57.574 "trtype": "TCP", 00:22:57.574 "adrfam": "IPv4", 00:22:57.574 "traddr": "10.0.0.2", 00:22:57.574 "trsvcid": "4420" 00:22:57.574 }, 00:22:57.574 "peer_address": { 00:22:57.574 "trtype": "TCP", 00:22:57.574 "adrfam": "IPv4", 00:22:57.574 "traddr": "10.0.0.1", 00:22:57.574 "trsvcid": "33392" 00:22:57.574 }, 00:22:57.574 "auth": { 00:22:57.574 "state": "completed", 00:22:57.574 "digest": "sha256", 00:22:57.574 "dhgroup": "ffdhe6144" 00:22:57.574 } 00:22:57.574 } 00:22:57.574 ]' 00:22:57.574 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.574 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:57.574 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.835 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:57.835 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.835 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.835 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.835 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.835 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:57.835 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:22:58.775 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:58.776 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.346 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.346 { 00:22:59.346 "cntlid": 39, 00:22:59.346 "qid": 0, 00:22:59.346 "state": "enabled", 00:22:59.346 "thread": "nvmf_tgt_poll_group_000", 00:22:59.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:59.346 "listen_address": { 00:22:59.346 "trtype": "TCP", 00:22:59.346 "adrfam": 
"IPv4", 00:22:59.346 "traddr": "10.0.0.2", 00:22:59.346 "trsvcid": "4420" 00:22:59.346 }, 00:22:59.346 "peer_address": { 00:22:59.346 "trtype": "TCP", 00:22:59.346 "adrfam": "IPv4", 00:22:59.346 "traddr": "10.0.0.1", 00:22:59.346 "trsvcid": "33424" 00:22:59.346 }, 00:22:59.346 "auth": { 00:22:59.346 "state": "completed", 00:22:59.346 "digest": "sha256", 00:22:59.346 "dhgroup": "ffdhe6144" 00:22:59.346 } 00:22:59.346 } 00:22:59.346 ]' 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:59.346 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.607 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:59.607 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.607 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.607 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.607 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.868 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:22:59.868 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:00.439 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:00.700 
16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.700 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.271 00:23:01.271 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.271 16:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.271 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.271 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.271 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.271 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.271 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.533 { 00:23:01.533 "cntlid": 41, 00:23:01.533 "qid": 0, 00:23:01.533 "state": "enabled", 00:23:01.533 "thread": "nvmf_tgt_poll_group_000", 00:23:01.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:01.533 "listen_address": { 00:23:01.533 "trtype": "TCP", 00:23:01.533 "adrfam": "IPv4", 00:23:01.533 "traddr": "10.0.0.2", 00:23:01.533 "trsvcid": "4420" 00:23:01.533 }, 00:23:01.533 "peer_address": { 00:23:01.533 "trtype": "TCP", 00:23:01.533 "adrfam": "IPv4", 00:23:01.533 "traddr": "10.0.0.1", 00:23:01.533 "trsvcid": "33458" 00:23:01.533 }, 00:23:01.533 "auth": { 00:23:01.533 "state": "completed", 00:23:01.533 "digest": "sha256", 00:23:01.533 "dhgroup": "ffdhe8192" 00:23:01.533 } 00:23:01.533 } 00:23:01.533 ]' 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.533 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.794 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:01.794 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:02.366 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.366 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.366 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.366 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.628 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.201 00:23:03.201 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.201 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.201 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.462 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.462 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.462 16:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.463 { 00:23:03.463 "cntlid": 43, 00:23:03.463 "qid": 0, 00:23:03.463 "state": "enabled", 00:23:03.463 "thread": "nvmf_tgt_poll_group_000", 00:23:03.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:03.463 "listen_address": { 00:23:03.463 "trtype": "TCP", 00:23:03.463 "adrfam": "IPv4", 00:23:03.463 "traddr": "10.0.0.2", 00:23:03.463 "trsvcid": "4420" 00:23:03.463 }, 00:23:03.463 "peer_address": { 00:23:03.463 "trtype": "TCP", 00:23:03.463 "adrfam": "IPv4", 00:23:03.463 "traddr": "10.0.0.1", 00:23:03.463 "trsvcid": "34422" 00:23:03.463 }, 00:23:03.463 "auth": { 00:23:03.463 "state": "completed", 00:23:03.463 "digest": "sha256", 00:23:03.463 "dhgroup": "ffdhe8192" 00:23:03.463 } 00:23:03.463 } 00:23:03.463 ]' 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.463 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.724 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:03.724 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.668 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.669 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.240 00:23:05.240 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.240 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.240 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.501 { 00:23:05.501 "cntlid": 45, 00:23:05.501 "qid": 0, 00:23:05.501 "state": "enabled", 00:23:05.501 "thread": "nvmf_tgt_poll_group_000", 00:23:05.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:05.501 
"listen_address": { 00:23:05.501 "trtype": "TCP", 00:23:05.501 "adrfam": "IPv4", 00:23:05.501 "traddr": "10.0.0.2", 00:23:05.501 "trsvcid": "4420" 00:23:05.501 }, 00:23:05.501 "peer_address": { 00:23:05.501 "trtype": "TCP", 00:23:05.501 "adrfam": "IPv4", 00:23:05.501 "traddr": "10.0.0.1", 00:23:05.501 "trsvcid": "34446" 00:23:05.501 }, 00:23:05.501 "auth": { 00:23:05.501 "state": "completed", 00:23:05.501 "digest": "sha256", 00:23:05.501 "dhgroup": "ffdhe8192" 00:23:05.501 } 00:23:05.501 } 00:23:05.501 ]' 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.501 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.763 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:05.763 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:06.706 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.706 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.706 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.706 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:06.707 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.278 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.278 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.278 { 00:23:07.278 "cntlid": 47, 00:23:07.278 "qid": 0, 00:23:07.278 "state": "enabled", 00:23:07.278 "thread": "nvmf_tgt_poll_group_000", 00:23:07.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:07.279 "listen_address": { 00:23:07.279 "trtype": "TCP", 00:23:07.279 "adrfam": "IPv4", 00:23:07.279 "traddr": "10.0.0.2", 00:23:07.279 "trsvcid": "4420" 00:23:07.279 }, 00:23:07.279 "peer_address": { 00:23:07.279 "trtype": "TCP", 00:23:07.279 "adrfam": "IPv4", 00:23:07.279 "traddr": "10.0.0.1", 00:23:07.279 "trsvcid": "34466" 00:23:07.279 }, 00:23:07.279 "auth": { 00:23:07.279 "state": "completed", 00:23:07.279 "digest": "sha256", 00:23:07.279 "dhgroup": "ffdhe8192" 00:23:07.279 } 00:23:07.279 } 00:23:07.279 ]' 00:23:07.279 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.540 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:07.540 16:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.540 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:07.540 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.540 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.540 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.540 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.801 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:07.801 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:08.371 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.632 
16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.632 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.892 00:23:08.892 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.892 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.892 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.154 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.154 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.154 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.154 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.154 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.154 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.154 { 00:23:09.154 "cntlid": 49, 00:23:09.154 "qid": 0, 00:23:09.154 "state": "enabled", 00:23:09.154 "thread": "nvmf_tgt_poll_group_000", 00:23:09.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:09.154 "listen_address": { 00:23:09.154 "trtype": "TCP", 00:23:09.154 "adrfam": "IPv4", 00:23:09.154 "traddr": "10.0.0.2", 00:23:09.154 "trsvcid": "4420" 00:23:09.154 }, 00:23:09.154 "peer_address": { 00:23:09.154 "trtype": "TCP", 00:23:09.154 "adrfam": "IPv4", 00:23:09.154 "traddr": "10.0.0.1", 00:23:09.154 "trsvcid": "34486" 00:23:09.154 }, 00:23:09.154 "auth": { 00:23:09.154 "state": "completed", 00:23:09.154 "digest": "sha384", 00:23:09.154 "dhgroup": "null" 00:23:09.154 } 00:23:09.154 } 00:23:09.154 ]' 00:23:09.154 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.154 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:09.154 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.154 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:09.154 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.154 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.154 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:23:09.154 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.415 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:09.415 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:09.985 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.985 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.985 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.985 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.245 16:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.245 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.506 00:23:10.506 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.506 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.506 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.767 { 00:23:10.767 "cntlid": 51, 00:23:10.767 "qid": 0, 00:23:10.767 "state": "enabled", 00:23:10.767 "thread": "nvmf_tgt_poll_group_000", 00:23:10.767 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:10.767 "listen_address": { 00:23:10.767 "trtype": "TCP", 00:23:10.767 "adrfam": "IPv4", 00:23:10.767 "traddr": "10.0.0.2", 00:23:10.767 "trsvcid": "4420" 00:23:10.767 }, 00:23:10.767 "peer_address": { 00:23:10.767 "trtype": "TCP", 00:23:10.767 "adrfam": "IPv4", 00:23:10.767 "traddr": "10.0.0.1", 00:23:10.767 "trsvcid": "34518" 00:23:10.767 }, 00:23:10.767 "auth": { 00:23:10.767 "state": "completed", 00:23:10.767 "digest": "sha384", 00:23:10.767 "dhgroup": "null" 00:23:10.767 } 00:23:10.767 } 00:23:10.767 ]' 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.767 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.028 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:11.028 16:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:11.969 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.970 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.230 00:23:12.230 16:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:12.230 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:12.230 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.491 { 00:23:12.491 "cntlid": 53, 00:23:12.491 "qid": 0, 00:23:12.491 "state": "enabled", 00:23:12.491 "thread": "nvmf_tgt_poll_group_000", 00:23:12.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:12.491 "listen_address": { 00:23:12.491 "trtype": "TCP", 00:23:12.491 "adrfam": "IPv4", 00:23:12.491 "traddr": "10.0.0.2", 00:23:12.491 "trsvcid": "4420" 00:23:12.491 }, 00:23:12.491 "peer_address": { 00:23:12.491 "trtype": "TCP", 00:23:12.491 "adrfam": "IPv4", 00:23:12.491 "traddr": "10.0.0.1", 00:23:12.491 "trsvcid": "47476" 00:23:12.491 }, 00:23:12.491 "auth": { 00:23:12.491 "state": "completed", 00:23:12.491 "digest": "sha384", 00:23:12.491 "dhgroup": "null" 00:23:12.491 } 00:23:12.491 } 00:23:12.491 ]' 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.491 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.751 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:12.751 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:13.692 
16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.692 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.952 00:23:13.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.213 16:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:14.213 { 00:23:14.213 "cntlid": 55, 00:23:14.213 "qid": 0, 00:23:14.213 "state": "enabled", 00:23:14.213 "thread": "nvmf_tgt_poll_group_000", 00:23:14.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:14.213 "listen_address": { 00:23:14.213 "trtype": "TCP", 00:23:14.213 "adrfam": "IPv4", 00:23:14.213 "traddr": "10.0.0.2", 00:23:14.213 "trsvcid": "4420" 00:23:14.213 }, 00:23:14.213 "peer_address": { 00:23:14.213 "trtype": "TCP", 00:23:14.213 "adrfam": "IPv4", 00:23:14.213 "traddr": "10.0.0.1", 00:23:14.213 "trsvcid": "47502" 00:23:14.213 }, 00:23:14.213 "auth": { 00:23:14.213 "state": "completed", 00:23:14.213 "digest": "sha384", 00:23:14.213 "dhgroup": "null" 00:23:14.213 } 00:23:14.213 } 00:23:14.213 ]' 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.213 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.473 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:14.473 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:15.413 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.413 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.413 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.413 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.413 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.413 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:15.414 16:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.414 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.674 00:23:15.674 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.674 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.674 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.935 { 00:23:15.935 "cntlid": 57, 00:23:15.935 "qid": 0, 00:23:15.935 "state": "enabled", 00:23:15.935 "thread": "nvmf_tgt_poll_group_000", 00:23:15.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:15.935 "listen_address": { 00:23:15.935 "trtype": "TCP", 00:23:15.935 "adrfam": "IPv4", 00:23:15.935 "traddr": "10.0.0.2", 00:23:15.935 
"trsvcid": "4420" 00:23:15.935 }, 00:23:15.935 "peer_address": { 00:23:15.935 "trtype": "TCP", 00:23:15.935 "adrfam": "IPv4", 00:23:15.935 "traddr": "10.0.0.1", 00:23:15.935 "trsvcid": "47518" 00:23:15.935 }, 00:23:15.935 "auth": { 00:23:15.935 "state": "completed", 00:23:15.935 "digest": "sha384", 00:23:15.935 "dhgroup": "ffdhe2048" 00:23:15.935 } 00:23:15.935 } 00:23:15.935 ]' 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.935 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.194 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:16.194 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:16.763 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.023 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.023 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.023 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.023 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.023 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:17.024 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:17.024 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.024 16:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.024 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.284 00:23:17.284 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.284 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.284 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.543 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.544 { 00:23:17.544 "cntlid": 59, 00:23:17.544 "qid": 0, 00:23:17.544 "state": "enabled", 00:23:17.544 "thread": "nvmf_tgt_poll_group_000", 00:23:17.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:17.544 "listen_address": { 00:23:17.544 "trtype": "TCP", 00:23:17.544 "adrfam": "IPv4", 00:23:17.544 "traddr": "10.0.0.2", 00:23:17.544 "trsvcid": "4420" 00:23:17.544 }, 00:23:17.544 "peer_address": { 00:23:17.544 "trtype": "TCP", 00:23:17.544 "adrfam": "IPv4", 00:23:17.544 "traddr": "10.0.0.1", 00:23:17.544 "trsvcid": "47528" 00:23:17.544 }, 00:23:17.544 "auth": { 00:23:17.544 "state": "completed", 00:23:17.544 "digest": "sha384", 00:23:17.544 "dhgroup": "ffdhe2048" 00:23:17.544 } 00:23:17.544 } 00:23:17.544 ]' 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.544 16:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.544 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.803 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:17.803 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.743 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.002 00:23:19.002 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.002 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.002 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.262 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.262 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.262 16:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.262 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.262 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.262 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.262 { 00:23:19.262 "cntlid": 61, 00:23:19.262 "qid": 0, 00:23:19.262 "state": "enabled", 00:23:19.262 "thread": "nvmf_tgt_poll_group_000", 00:23:19.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:19.262 "listen_address": { 00:23:19.262 "trtype": "TCP", 00:23:19.262 "adrfam": "IPv4", 00:23:19.262 "traddr": "10.0.0.2", 00:23:19.262 "trsvcid": "4420" 00:23:19.262 }, 00:23:19.262 "peer_address": { 00:23:19.263 "trtype": "TCP", 00:23:19.263 "adrfam": "IPv4", 00:23:19.263 "traddr": "10.0.0.1", 00:23:19.263 "trsvcid": "47554" 00:23:19.263 }, 00:23:19.263 "auth": { 00:23:19.263 "state": "completed", 00:23:19.263 "digest": "sha384", 00:23:19.263 "dhgroup": "ffdhe2048" 00:23:19.263 } 00:23:19.263 } 00:23:19.263 ]' 00:23:19.263 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.263 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:19.263 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.263 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:19.263 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.263 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.263 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.263 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.522 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:19.522 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:20.094 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.094 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:20.354 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:20.614 00:23:20.614 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.614 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.614 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.874 { 00:23:20.874 "cntlid": 63, 00:23:20.874 "qid": 0, 00:23:20.874 "state": "enabled", 00:23:20.874 "thread": "nvmf_tgt_poll_group_000", 00:23:20.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:20.874 "listen_address": { 00:23:20.874 "trtype": "TCP", 00:23:20.874 "adrfam": 
"IPv4", 00:23:20.874 "traddr": "10.0.0.2", 00:23:20.874 "trsvcid": "4420" 00:23:20.874 }, 00:23:20.874 "peer_address": { 00:23:20.874 "trtype": "TCP", 00:23:20.874 "adrfam": "IPv4", 00:23:20.874 "traddr": "10.0.0.1", 00:23:20.874 "trsvcid": "47568" 00:23:20.874 }, 00:23:20.874 "auth": { 00:23:20.874 "state": "completed", 00:23:20.874 "digest": "sha384", 00:23:20.874 "dhgroup": "ffdhe2048" 00:23:20.874 } 00:23:20.874 } 00:23:20.874 ]' 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.874 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.135 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:21.135 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:22.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:22.076 
16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.076 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.336 00:23:22.336 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.336 16:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.336 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.596 { 00:23:22.596 "cntlid": 65, 00:23:22.596 "qid": 0, 00:23:22.596 "state": "enabled", 00:23:22.596 "thread": "nvmf_tgt_poll_group_000", 00:23:22.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:22.596 "listen_address": { 00:23:22.596 "trtype": "TCP", 00:23:22.596 "adrfam": "IPv4", 00:23:22.596 "traddr": "10.0.0.2", 00:23:22.596 "trsvcid": "4420" 00:23:22.596 }, 00:23:22.596 "peer_address": { 00:23:22.596 "trtype": "TCP", 00:23:22.596 "adrfam": "IPv4", 00:23:22.596 "traddr": "10.0.0.1", 00:23:22.596 "trsvcid": "33736" 00:23:22.596 }, 00:23:22.596 "auth": { 00:23:22.596 "state": "completed", 00:23:22.596 "digest": "sha384", 00:23:22.596 "dhgroup": "ffdhe3072" 00:23:22.596 } 00:23:22.596 } 00:23:22.596 ]' 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:22.596 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.597 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.597 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.597 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.856 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:22.856 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:23.428 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:23.688 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.689 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.950 00:23:23.950 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:23.950 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:23.950 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.210 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.210 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.210 
16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.210 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.210 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.210 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.210 { 00:23:24.210 "cntlid": 67, 00:23:24.210 "qid": 0, 00:23:24.210 "state": "enabled", 00:23:24.210 "thread": "nvmf_tgt_poll_group_000", 00:23:24.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:24.210 "listen_address": { 00:23:24.210 "trtype": "TCP", 00:23:24.210 "adrfam": "IPv4", 00:23:24.210 "traddr": "10.0.0.2", 00:23:24.210 "trsvcid": "4420" 00:23:24.210 }, 00:23:24.210 "peer_address": { 00:23:24.210 "trtype": "TCP", 00:23:24.210 "adrfam": "IPv4", 00:23:24.210 "traddr": "10.0.0.1", 00:23:24.211 "trsvcid": "33766" 00:23:24.211 }, 00:23:24.211 "auth": { 00:23:24.211 "state": "completed", 00:23:24.211 "digest": "sha384", 00:23:24.211 "dhgroup": "ffdhe3072" 00:23:24.211 } 00:23:24.211 } 00:23:24.211 ]' 00:23:24.211 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.211 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:24.211 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.211 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:24.211 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.211 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.211 16:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.211 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.472 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:24.472 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.415 16:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.415 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.676 00:23:25.676 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.676 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.676 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.938 { 00:23:25.938 "cntlid": 69, 00:23:25.938 "qid": 0, 00:23:25.938 "state": "enabled", 00:23:25.938 "thread": "nvmf_tgt_poll_group_000", 00:23:25.938 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:25.938 "listen_address": { 00:23:25.938 "trtype": "TCP", 00:23:25.938 "adrfam": "IPv4", 00:23:25.938 "traddr": "10.0.0.2", 00:23:25.938 "trsvcid": "4420" 00:23:25.938 }, 00:23:25.938 "peer_address": { 00:23:25.938 "trtype": "TCP", 00:23:25.938 "adrfam": "IPv4", 00:23:25.938 "traddr": "10.0.0.1", 00:23:25.938 "trsvcid": "33796" 00:23:25.938 }, 00:23:25.938 "auth": { 00:23:25.938 "state": "completed", 00:23:25.938 "digest": "sha384", 00:23:25.938 "dhgroup": "ffdhe3072" 00:23:25.938 } 00:23:25.938 } 00:23:25.938 ]' 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.938 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.197 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:26.197 16:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:27.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:27.139 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:27.399 00:23:27.399 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:23:27.399 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.399 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:27.659 { 00:23:27.659 "cntlid": 71, 00:23:27.659 "qid": 0, 00:23:27.659 "state": "enabled", 00:23:27.659 "thread": "nvmf_tgt_poll_group_000", 00:23:27.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:27.659 "listen_address": { 00:23:27.659 "trtype": "TCP", 00:23:27.659 "adrfam": "IPv4", 00:23:27.659 "traddr": "10.0.0.2", 00:23:27.659 "trsvcid": "4420" 00:23:27.659 }, 00:23:27.659 "peer_address": { 00:23:27.659 "trtype": "TCP", 00:23:27.659 "adrfam": "IPv4", 00:23:27.659 "traddr": "10.0.0.1", 00:23:27.659 "trsvcid": "33812" 00:23:27.659 }, 00:23:27.659 "auth": { 00:23:27.659 "state": "completed", 00:23:27.659 "digest": "sha384", 00:23:27.659 "dhgroup": "ffdhe3072" 00:23:27.659 } 00:23:27.659 } 00:23:27.659 ]' 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:27.659 16:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.659 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.919 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:27.919 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.860 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.120 00:23:29.120 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.120 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.120 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.380 { 00:23:29.380 "cntlid": 73, 00:23:29.380 "qid": 0, 00:23:29.380 "state": "enabled", 00:23:29.380 "thread": "nvmf_tgt_poll_group_000", 00:23:29.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:29.380 "listen_address": { 00:23:29.380 "trtype": "TCP", 00:23:29.380 "adrfam": "IPv4", 00:23:29.380 "traddr": "10.0.0.2", 00:23:29.380 "trsvcid": "4420" 00:23:29.380 }, 00:23:29.380 "peer_address": { 00:23:29.380 "trtype": "TCP", 00:23:29.380 "adrfam": "IPv4", 00:23:29.380 "traddr": "10.0.0.1", 00:23:29.380 "trsvcid": "33844" 00:23:29.380 }, 00:23:29.380 "auth": { 00:23:29.380 "state": "completed", 00:23:29.380 "digest": "sha384", 00:23:29.380 "dhgroup": "ffdhe4096" 00:23:29.380 } 00:23:29.380 } 00:23:29.380 ]' 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.380 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.640 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:29.640 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:30.217 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.217 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.217 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.217 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.531 16:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.531 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.846 00:23:30.846 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:30.846 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:30.846 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.141 { 00:23:31.141 "cntlid": 75, 00:23:31.141 "qid": 0, 00:23:31.141 "state": 
"enabled", 00:23:31.141 "thread": "nvmf_tgt_poll_group_000", 00:23:31.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:31.141 "listen_address": { 00:23:31.141 "trtype": "TCP", 00:23:31.141 "adrfam": "IPv4", 00:23:31.141 "traddr": "10.0.0.2", 00:23:31.141 "trsvcid": "4420" 00:23:31.141 }, 00:23:31.141 "peer_address": { 00:23:31.141 "trtype": "TCP", 00:23:31.141 "adrfam": "IPv4", 00:23:31.141 "traddr": "10.0.0.1", 00:23:31.141 "trsvcid": "33868" 00:23:31.141 }, 00:23:31.141 "auth": { 00:23:31.141 "state": "completed", 00:23:31.141 "digest": "sha384", 00:23:31.141 "dhgroup": "ffdhe4096" 00:23:31.141 } 00:23:31.141 } 00:23:31.141 ]' 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:31.141 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.141 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:31.141 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.141 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.141 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.141 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.403 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret 
DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:31.403 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:31.975 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.975 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.975 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.975 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.975 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.975 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:31.975 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:31.975 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 
ffdhe4096 2 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.236 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.497 00:23:32.497 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:32.497 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:32.497 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.757 { 00:23:32.757 "cntlid": 77, 00:23:32.757 "qid": 0, 00:23:32.757 "state": "enabled", 00:23:32.757 "thread": "nvmf_tgt_poll_group_000", 00:23:32.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:32.757 "listen_address": { 00:23:32.757 "trtype": "TCP", 00:23:32.757 "adrfam": "IPv4", 00:23:32.757 "traddr": "10.0.0.2", 00:23:32.757 "trsvcid": "4420" 00:23:32.757 }, 00:23:32.757 "peer_address": { 00:23:32.757 "trtype": "TCP", 00:23:32.757 "adrfam": "IPv4", 00:23:32.757 "traddr": "10.0.0.1", 00:23:32.757 "trsvcid": "35272" 00:23:32.757 }, 00:23:32.757 "auth": { 00:23:32.757 "state": "completed", 00:23:32.757 "digest": "sha384", 00:23:32.757 "dhgroup": "ffdhe4096" 00:23:32.757 } 
00:23:32.757 } 00:23:32.757 ]' 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.757 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.017 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:33.017 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:33.958 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:23:33.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.958 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.958 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.958 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.958 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.958 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:33.958 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:33.959 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:34.219 00:23:34.219 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.219 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.219 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.479 { 00:23:34.479 "cntlid": 79, 00:23:34.479 "qid": 0, 00:23:34.479 "state": "enabled", 00:23:34.479 "thread": "nvmf_tgt_poll_group_000", 00:23:34.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:34.479 "listen_address": { 00:23:34.479 "trtype": "TCP", 00:23:34.479 "adrfam": "IPv4", 00:23:34.479 "traddr": "10.0.0.2", 00:23:34.479 "trsvcid": "4420" 00:23:34.479 }, 00:23:34.479 "peer_address": { 00:23:34.479 "trtype": "TCP", 00:23:34.479 "adrfam": "IPv4", 00:23:34.479 "traddr": "10.0.0.1", 00:23:34.479 "trsvcid": "35294" 00:23:34.479 }, 00:23:34.479 "auth": { 00:23:34.479 "state": "completed", 00:23:34.479 "digest": "sha384", 00:23:34.479 "dhgroup": "ffdhe4096" 00:23:34.479 } 00:23:34.479 } 00:23:34.479 ]' 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.479 16:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.479 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.740 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:34.740 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:35.681 16:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:35.681 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.682 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.253 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.253 { 00:23:36.253 "cntlid": 81, 00:23:36.253 "qid": 0, 00:23:36.253 "state": "enabled", 00:23:36.253 "thread": "nvmf_tgt_poll_group_000", 00:23:36.253 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:36.253 "listen_address": { 00:23:36.253 "trtype": "TCP", 00:23:36.253 "adrfam": "IPv4", 00:23:36.253 "traddr": "10.0.0.2", 00:23:36.253 "trsvcid": "4420" 00:23:36.253 }, 00:23:36.253 "peer_address": { 00:23:36.253 "trtype": "TCP", 00:23:36.253 "adrfam": "IPv4", 00:23:36.253 "traddr": "10.0.0.1", 00:23:36.253 "trsvcid": "35314" 00:23:36.253 }, 00:23:36.253 "auth": { 00:23:36.253 "state": "completed", 00:23:36.253 "digest": "sha384", 00:23:36.253 "dhgroup": "ffdhe6144" 00:23:36.253 } 00:23:36.253 } 00:23:36.253 ]' 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.253 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.519 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:36.519 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.519 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.519 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.519 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.519 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret 
DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:36.519 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:37.466 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:37.467 16:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.467 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.038 00:23:38.038 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:38.038 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.038 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:38.038 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.038 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.038 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.038 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.038 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.038 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.038 { 00:23:38.038 "cntlid": 83, 00:23:38.038 "qid": 0, 00:23:38.038 "state": "enabled", 00:23:38.038 "thread": "nvmf_tgt_poll_group_000", 00:23:38.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:38.038 "listen_address": { 00:23:38.038 "trtype": "TCP", 00:23:38.038 "adrfam": "IPv4", 00:23:38.038 "traddr": "10.0.0.2", 00:23:38.038 "trsvcid": "4420" 00:23:38.038 }, 00:23:38.038 "peer_address": { 00:23:38.038 "trtype": "TCP", 00:23:38.038 "adrfam": "IPv4", 00:23:38.038 "traddr": "10.0.0.1", 00:23:38.038 "trsvcid": "35326" 00:23:38.038 }, 00:23:38.038 "auth": { 00:23:38.038 "state": 
"completed", 00:23:38.038 "digest": "sha384", 00:23:38.038 "dhgroup": "ffdhe6144" 00:23:38.038 } 00:23:38.038 } 00:23:38.038 ]' 00:23:38.038 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.298 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:38.298 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.298 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:38.298 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.298 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.298 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.298 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.559 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:38.559 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:39.129 16:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.129 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.129 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.129 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.129 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.129 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:39.130 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.130 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.390 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.650 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.911 
16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:39.911 { 00:23:39.911 "cntlid": 85, 00:23:39.911 "qid": 0, 00:23:39.911 "state": "enabled", 00:23:39.911 "thread": "nvmf_tgt_poll_group_000", 00:23:39.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:39.911 "listen_address": { 00:23:39.911 "trtype": "TCP", 00:23:39.911 "adrfam": "IPv4", 00:23:39.911 "traddr": "10.0.0.2", 00:23:39.911 "trsvcid": "4420" 00:23:39.911 }, 00:23:39.911 "peer_address": { 00:23:39.911 "trtype": "TCP", 00:23:39.911 "adrfam": "IPv4", 00:23:39.911 "traddr": "10.0.0.1", 00:23:39.911 "trsvcid": "35364" 00:23:39.911 }, 00:23:39.911 "auth": { 00:23:39.911 "state": "completed", 00:23:39.911 "digest": "sha384", 00:23:39.911 "dhgroup": "ffdhe6144" 00:23:39.911 } 00:23:39.911 } 00:23:39.911 ]' 00:23:39.911 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:40.171 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:40.171 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:40.171 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:40.171 16:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:40.171 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.171 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.171 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.431 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:40.431 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:41.002 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.003 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:41.003 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.003 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.003 
16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.003 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:41.003 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.003 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.264 16:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:41.264 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:41.525 00:23:41.525 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:41.525 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:41.525 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:41.787 { 00:23:41.787 "cntlid": 87, 00:23:41.787 
"qid": 0, 00:23:41.787 "state": "enabled", 00:23:41.787 "thread": "nvmf_tgt_poll_group_000", 00:23:41.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:41.787 "listen_address": { 00:23:41.787 "trtype": "TCP", 00:23:41.787 "adrfam": "IPv4", 00:23:41.787 "traddr": "10.0.0.2", 00:23:41.787 "trsvcid": "4420" 00:23:41.787 }, 00:23:41.787 "peer_address": { 00:23:41.787 "trtype": "TCP", 00:23:41.787 "adrfam": "IPv4", 00:23:41.787 "traddr": "10.0.0.1", 00:23:41.787 "trsvcid": "57310" 00:23:41.787 }, 00:23:41.787 "auth": { 00:23:41.787 "state": "completed", 00:23:41.787 "digest": "sha384", 00:23:41.787 "dhgroup": "ffdhe6144" 00:23:41.787 } 00:23:41.787 } 00:23:41.787 ]' 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:41.787 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:42.048 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:42.048 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:42.048 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.048 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.048 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.048 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:42.048 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:42.991 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:42.991 16:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.991 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.565 00:23:43.565 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:43.565 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:43.565 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:43.826 { 00:23:43.826 "cntlid": 89, 00:23:43.826 "qid": 0, 00:23:43.826 "state": "enabled", 00:23:43.826 "thread": "nvmf_tgt_poll_group_000", 00:23:43.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:43.826 "listen_address": { 00:23:43.826 "trtype": "TCP", 00:23:43.826 "adrfam": "IPv4", 00:23:43.826 "traddr": "10.0.0.2", 00:23:43.826 "trsvcid": "4420" 00:23:43.826 }, 00:23:43.826 "peer_address": { 00:23:43.826 "trtype": "TCP", 00:23:43.826 "adrfam": "IPv4", 00:23:43.826 "traddr": "10.0.0.1", 00:23:43.826 "trsvcid": "57334" 00:23:43.826 }, 00:23:43.826 "auth": { 00:23:43.826 "state": 
"completed", 00:23:43.826 "digest": "sha384", 00:23:43.826 "dhgroup": "ffdhe8192" 00:23:43.826 } 00:23:43.826 } 00:23:43.826 ]' 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:43.826 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:44.088 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.088 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.088 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.088 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:44.088 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret 
DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:45.030 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.030 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.030 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.030 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.030 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.030 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:45.030 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:45.030 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.030 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.602 00:23:45.602 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:45.602 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:45.602 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:45.864 { 00:23:45.864 "cntlid": 91, 00:23:45.864 "qid": 0, 00:23:45.864 "state": "enabled", 00:23:45.864 "thread": "nvmf_tgt_poll_group_000", 00:23:45.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:45.864 "listen_address": { 00:23:45.864 "trtype": "TCP", 00:23:45.864 "adrfam": "IPv4", 00:23:45.864 "traddr": "10.0.0.2", 00:23:45.864 "trsvcid": "4420" 00:23:45.864 }, 00:23:45.864 "peer_address": { 00:23:45.864 "trtype": "TCP", 00:23:45.864 "adrfam": "IPv4", 00:23:45.864 "traddr": "10.0.0.1", 00:23:45.864 "trsvcid": "57370" 00:23:45.864 }, 00:23:45.864 "auth": { 00:23:45.864 "state": "completed", 00:23:45.864 "digest": "sha384", 00:23:45.864 "dhgroup": "ffdhe8192" 00:23:45.864 } 00:23:45.864 } 00:23:45.864 ]' 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:45.864 16:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.864 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.124 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:46.124 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:47.068 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.068 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.068 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:47.068 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.068 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.068 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:47.068 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:47.068 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.068 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.639 00:23:47.639 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:47.639 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.639 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:47.899 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.899 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.899 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.899 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.899 16:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.899 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:47.899 { 00:23:47.899 "cntlid": 93, 00:23:47.899 "qid": 0, 00:23:47.899 "state": "enabled", 00:23:47.899 "thread": "nvmf_tgt_poll_group_000", 00:23:47.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:47.900 "listen_address": { 00:23:47.900 "trtype": "TCP", 00:23:47.900 "adrfam": "IPv4", 00:23:47.900 "traddr": "10.0.0.2", 00:23:47.900 "trsvcid": "4420" 00:23:47.900 }, 00:23:47.900 "peer_address": { 00:23:47.900 "trtype": "TCP", 00:23:47.900 "adrfam": "IPv4", 00:23:47.900 "traddr": "10.0.0.1", 00:23:47.900 "trsvcid": "57398" 00:23:47.900 }, 00:23:47.900 "auth": { 00:23:47.900 "state": "completed", 00:23:47.900 "digest": "sha384", 00:23:47.900 "dhgroup": "ffdhe8192" 00:23:47.900 } 00:23:47.900 } 00:23:47.900 ]' 00:23:47.900 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:47.900 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:47.900 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:47.900 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:47.900 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:47.900 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.900 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:47.900 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.160 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:48.160 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:49.103 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.103 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:49.103 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.103 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.103 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.103 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:49.103 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.103 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:49.103 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:49.676 00:23:49.676 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:49.676 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:49.676 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:49.937 { 00:23:49.937 "cntlid": 95, 00:23:49.937 "qid": 0, 00:23:49.937 "state": "enabled", 00:23:49.937 "thread": "nvmf_tgt_poll_group_000", 00:23:49.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:49.937 "listen_address": { 00:23:49.937 "trtype": "TCP", 00:23:49.937 "adrfam": "IPv4", 00:23:49.937 "traddr": "10.0.0.2", 00:23:49.937 "trsvcid": "4420" 00:23:49.937 }, 00:23:49.937 "peer_address": { 00:23:49.937 "trtype": "TCP", 00:23:49.937 "adrfam": "IPv4", 00:23:49.937 "traddr": "10.0.0.1", 
00:23:49.937 "trsvcid": "57420" 00:23:49.937 }, 00:23:49.937 "auth": { 00:23:49.937 "state": "completed", 00:23:49.937 "digest": "sha384", 00:23:49.937 "dhgroup": "ffdhe8192" 00:23:49.937 } 00:23:49.937 } 00:23:49.937 ]' 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.937 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.198 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:50.198 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:50.770 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:51.032 16:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.032 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.293 00:23:51.293 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:51.293 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:51.293 16:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:51.554 { 00:23:51.554 "cntlid": 97, 00:23:51.554 "qid": 0, 00:23:51.554 "state": "enabled", 00:23:51.554 "thread": "nvmf_tgt_poll_group_000", 00:23:51.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:51.554 "listen_address": { 00:23:51.554 "trtype": "TCP", 00:23:51.554 "adrfam": "IPv4", 00:23:51.554 "traddr": "10.0.0.2", 00:23:51.554 "trsvcid": "4420" 00:23:51.554 }, 00:23:51.554 "peer_address": { 00:23:51.554 "trtype": "TCP", 00:23:51.554 "adrfam": "IPv4", 00:23:51.554 "traddr": "10.0.0.1", 00:23:51.554 "trsvcid": "41990" 00:23:51.554 }, 00:23:51.554 "auth": { 00:23:51.554 "state": "completed", 00:23:51.554 "digest": "sha512", 00:23:51.554 "dhgroup": "null" 00:23:51.554 } 00:23:51.554 } 00:23:51.554 ]' 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:51.554 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:51.814 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:51.814 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:51.814 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.814 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:51.814 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:52.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.757 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.758 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.019 00:23:53.019 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:53.019 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:53.019 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
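The trace above repeats one verification cycle per key/digest/dhgroup combination: connect with a `DHHC-1` secret, pull the qpair list through `rpc.py`, and string-compare the negotiated `auth` fields. A minimal standalone sketch of those two checks follows; the sample secret and JSON field values are copied from the log, the hash-id meanings are an assumption based on nvme-cli's key format, and the `get_field` helper is a plain-sed stand-in for the script's `jq` calls so the sketch runs without an SPDK target:

```shell
# Format check on a DH-HMAC-CHAP secret taken verbatim from the log.
# Assumed layout (per nvme-cli key format): DHHC-1:<t>:<base64>: where
# t=00 is an unhashed key and 01/02/03 mean SHA-256/384/512 transformed.
secret='DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE:'
case $secret in
  DHHC-1:0[0-3]:*:) fmt=well-formed ;;
  *)                fmt=malformed ;;
esac
echo "secret: $fmt"

# Trimmed sample shaped like the nvmf_subsystem_get_qpairs output above.
qpairs='[{"auth":{"state":"completed","digest":"sha384","dhgroup":"ffdhe8192"}}]'

# sed stand-in for the script's: jq -r '.[0].auth.digest' (and .dhgroup, .state)
get_field() { printf '%s' "$qpairs" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }

digest=$(get_field digest)
dhgroup=$(get_field dhgroup)
state=$(get_field state)

# The same literal comparisons auth.sh makes at its lines 75-77.
if [ "$digest" = "sha384" ] && [ "$dhgroup" = "ffdhe8192" ] && [ "$state" = "completed" ]; then
  echo "auth: verified"
else
  echo "auth: mismatch"
fi
```

In the real script the qpair JSON comes from `rpc.py -s /var/tmp/host.sock nvmf_subsystem_get_qpairs <nqn>`, and a failed comparison aborts the run rather than printing a mismatch.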
00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:53.280 { 00:23:53.280 "cntlid": 99, 00:23:53.280 "qid": 0, 00:23:53.280 "state": "enabled", 00:23:53.280 "thread": "nvmf_tgt_poll_group_000", 00:23:53.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:53.280 "listen_address": { 00:23:53.280 "trtype": "TCP", 00:23:53.280 "adrfam": "IPv4", 00:23:53.280 "traddr": "10.0.0.2", 00:23:53.280 "trsvcid": "4420" 00:23:53.280 }, 00:23:53.280 "peer_address": { 00:23:53.280 "trtype": "TCP", 00:23:53.280 "adrfam": "IPv4", 00:23:53.280 "traddr": "10.0.0.1", 00:23:53.280 "trsvcid": "42014" 00:23:53.280 }, 00:23:53.280 "auth": { 00:23:53.280 "state": "completed", 00:23:53.280 "digest": "sha512", 00:23:53.280 "dhgroup": "null" 00:23:53.280 } 00:23:53.280 } 00:23:53.280 ]' 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.280 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.280 16:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.542 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:53.542 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:23:54.113 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.113 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.113 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.113 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.375 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.376 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.635 00:23:54.635 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:54.635 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:54.635 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:54.895 { 00:23:54.895 "cntlid": 101, 00:23:54.895 "qid": 0, 00:23:54.895 "state": "enabled", 00:23:54.895 "thread": "nvmf_tgt_poll_group_000", 00:23:54.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:54.895 "listen_address": { 00:23:54.895 "trtype": "TCP", 00:23:54.895 "adrfam": "IPv4", 00:23:54.895 
"traddr": "10.0.0.2", 00:23:54.895 "trsvcid": "4420" 00:23:54.895 }, 00:23:54.895 "peer_address": { 00:23:54.895 "trtype": "TCP", 00:23:54.895 "adrfam": "IPv4", 00:23:54.895 "traddr": "10.0.0.1", 00:23:54.895 "trsvcid": "42040" 00:23:54.895 }, 00:23:54.895 "auth": { 00:23:54.895 "state": "completed", 00:23:54.895 "digest": "sha512", 00:23:54.895 "dhgroup": "null" 00:23:54.895 } 00:23:54.895 } 00:23:54.895 ]' 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.895 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:55.157 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:55.157 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:23:56.100 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.100 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.100 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.100 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.100 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.100 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:56.100 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:56.100 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:56.100 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:56.361 00:23:56.361 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:56.361 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:56.361 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:56.621 { 00:23:56.621 "cntlid": 103, 00:23:56.621 "qid": 0, 00:23:56.621 "state": "enabled", 00:23:56.621 "thread": "nvmf_tgt_poll_group_000", 00:23:56.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:56.621 "listen_address": { 00:23:56.621 "trtype": "TCP", 00:23:56.621 "adrfam": "IPv4", 00:23:56.621 "traddr": "10.0.0.2", 00:23:56.621 "trsvcid": "4420" 00:23:56.621 }, 00:23:56.621 "peer_address": { 00:23:56.621 "trtype": "TCP", 00:23:56.621 "adrfam": "IPv4", 00:23:56.621 "traddr": "10.0.0.1", 00:23:56.621 "trsvcid": "42070" 00:23:56.621 }, 00:23:56.621 "auth": { 00:23:56.621 "state": "completed", 00:23:56.621 "digest": "sha512", 00:23:56.621 "dhgroup": "null" 00:23:56.621 } 00:23:56.621 } 00:23:56.621 ]' 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:56.621 16:48:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.621 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.882 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:56.882 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.832 16:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.832 
16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.832 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:58.093 00:23:58.093 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:58.093 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:58.093 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:58.353 { 00:23:58.353 "cntlid": 105, 00:23:58.353 "qid": 0, 00:23:58.353 "state": "enabled", 00:23:58.353 "thread": "nvmf_tgt_poll_group_000", 00:23:58.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:58.353 "listen_address": { 00:23:58.353 "trtype": "TCP", 00:23:58.353 "adrfam": "IPv4", 00:23:58.353 "traddr": "10.0.0.2", 00:23:58.353 "trsvcid": "4420" 00:23:58.353 }, 00:23:58.353 "peer_address": { 00:23:58.353 "trtype": "TCP", 00:23:58.353 "adrfam": "IPv4", 00:23:58.353 "traddr": "10.0.0.1", 00:23:58.353 "trsvcid": "42098" 00:23:58.353 }, 00:23:58.353 "auth": { 00:23:58.353 "state": "completed", 00:23:58.353 "digest": "sha512", 00:23:58.353 "dhgroup": "ffdhe2048" 00:23:58.353 } 00:23:58.353 } 00:23:58.353 ]' 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.353 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
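[Editor's note] The qpair listings above are verified field-by-field with jq (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). A minimal Python sketch of the same check, using a sample qpair document shaped like the `nvmf_subsystem_get_qpairs` output in this log (the auth values are copied from the transcript; the helper name is mine):

```python
import json

# Sample qpair listing shaped like the nvmf_subsystem_get_qpairs output
# captured above (auth fields copied from the transcript).
qpairs_json = '''
[
  {
    "cntlid": 105,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "ffdhe2048"
    }
  }
]
'''

def check_auth(qpairs, digest, dhgroup):
    """Mirror the jq checks in auth.sh: the first qpair must have
    completed DH-HMAC-CHAP with the expected digest and DH group."""
    auth = qpairs[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

qpairs = json.loads(qpairs_json)
print(check_auth(qpairs, "sha512", "ffdhe2048"))  # True
print(check_auth(qpairs, "sha512", "null"))       # False
```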
00:23:58.615 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:58.615 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.557 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.557 16:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.818 00:23:59.818 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:59.818 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:59.818 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.818 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.818 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:59.818 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.818 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.079 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.079 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:00.079 { 00:24:00.079 "cntlid": 107, 00:24:00.079 "qid": 0, 00:24:00.080 "state": "enabled", 00:24:00.080 "thread": "nvmf_tgt_poll_group_000", 00:24:00.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:00.080 "listen_address": { 00:24:00.080 "trtype": "TCP", 00:24:00.080 "adrfam": "IPv4", 00:24:00.080 "traddr": "10.0.0.2", 00:24:00.080 "trsvcid": "4420" 00:24:00.080 }, 00:24:00.080 "peer_address": { 
00:24:00.080 "trtype": "TCP", 00:24:00.080 "adrfam": "IPv4", 00:24:00.080 "traddr": "10.0.0.1", 00:24:00.080 "trsvcid": "42108" 00:24:00.080 }, 00:24:00.080 "auth": { 00:24:00.080 "state": "completed", 00:24:00.080 "digest": "sha512", 00:24:00.080 "dhgroup": "ffdhe2048" 00:24:00.080 } 00:24:00.080 } 00:24:00.080 ]' 00:24:00.080 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:00.080 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:00.080 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:00.080 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:00.080 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:00.080 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.080 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.080 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.341 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:00.341 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:00.912 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.912 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.912 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.912 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.173 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.173 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:01.173 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:01.173 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:01.173 16:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.173 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.434 00:24:01.434 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:01.434 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:01.434 16:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.694 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.694 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.694 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.694 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.694 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.694 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:01.694 { 00:24:01.694 "cntlid": 109, 00:24:01.694 "qid": 0, 00:24:01.694 "state": "enabled", 00:24:01.694 "thread": "nvmf_tgt_poll_group_000", 00:24:01.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:01.694 "listen_address": { 00:24:01.694 "trtype": "TCP", 00:24:01.694 "adrfam": "IPv4", 00:24:01.694 "traddr": "10.0.0.2", 00:24:01.694 "trsvcid": "4420" 00:24:01.694 }, 00:24:01.694 "peer_address": { 00:24:01.694 "trtype": "TCP", 00:24:01.694 "adrfam": "IPv4", 00:24:01.694 "traddr": "10.0.0.1", 00:24:01.694 "trsvcid": "42958" 00:24:01.694 }, 00:24:01.694 "auth": { 00:24:01.694 "state": "completed", 00:24:01.694 "digest": "sha512", 00:24:01.694 "dhgroup": "ffdhe2048" 00:24:01.694 } 00:24:01.694 } 00:24:01.695 ]' 00:24:01.695 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:01.695 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:01.695 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:24:01.695 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:01.695 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:01.695 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.695 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.695 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.956 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:01.956 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:02.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:02.897 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:03.158 00:24:03.158 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:03.158 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:03.158 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:03.418 { 00:24:03.418 "cntlid": 111, 00:24:03.418 "qid": 0, 00:24:03.418 "state": "enabled", 00:24:03.418 "thread": "nvmf_tgt_poll_group_000", 00:24:03.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:03.418 "listen_address": { 00:24:03.418 "trtype": "TCP", 00:24:03.418 "adrfam": "IPv4", 00:24:03.418 "traddr": "10.0.0.2", 00:24:03.418 "trsvcid": "4420" 00:24:03.418 }, 00:24:03.418 "peer_address": { 00:24:03.418 "trtype": "TCP", 00:24:03.418 "adrfam": "IPv4", 00:24:03.418 "traddr": "10.0.0.1", 00:24:03.418 "trsvcid": "42978" 00:24:03.418 }, 00:24:03.418 "auth": { 00:24:03.418 "state": "completed", 00:24:03.418 "digest": "sha512", 00:24:03.418 "dhgroup": "ffdhe2048" 00:24:03.418 } 00:24:03.418 } 00:24:03.418 ]' 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.418 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:24:03.678 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:03.678 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.617 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.877 00:24:04.877 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:04.877 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:04.877 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:05.137 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.137 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:05.137 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.137 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.137 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.137 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:05.137 { 00:24:05.137 "cntlid": 113, 00:24:05.137 "qid": 0, 00:24:05.137 "state": "enabled", 00:24:05.137 "thread": "nvmf_tgt_poll_group_000", 00:24:05.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:05.137 "listen_address": { 00:24:05.137 "trtype": "TCP", 00:24:05.137 "adrfam": "IPv4", 00:24:05.137 "traddr": "10.0.0.2", 00:24:05.137 "trsvcid": "4420" 00:24:05.137 }, 00:24:05.137 "peer_address": { 00:24:05.137 "trtype": "TCP", 00:24:05.137 "adrfam": "IPv4", 
00:24:05.137 "traddr": "10.0.0.1", 00:24:05.137 "trsvcid": "43014" 00:24:05.137 }, 00:24:05.137 "auth": { 00:24:05.137 "state": "completed", 00:24:05.137 "digest": "sha512", 00:24:05.137 "dhgroup": "ffdhe3072" 00:24:05.137 } 00:24:05.137 } 00:24:05.137 ]' 00:24:05.137 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:05.137 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:05.137 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:05.137 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:05.137 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:05.137 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:05.137 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:05.137 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:05.397 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:05.397 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:05.967 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.967 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:05.967 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.967 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.967 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.967 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:05.967 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:05.967 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe3072 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.227 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.487 00:24:06.487 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:06.487 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:06.487 
16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:06.748 { 00:24:06.748 "cntlid": 115, 00:24:06.748 "qid": 0, 00:24:06.748 "state": "enabled", 00:24:06.748 "thread": "nvmf_tgt_poll_group_000", 00:24:06.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:06.748 "listen_address": { 00:24:06.748 "trtype": "TCP", 00:24:06.748 "adrfam": "IPv4", 00:24:06.748 "traddr": "10.0.0.2", 00:24:06.748 "trsvcid": "4420" 00:24:06.748 }, 00:24:06.748 "peer_address": { 00:24:06.748 "trtype": "TCP", 00:24:06.748 "adrfam": "IPv4", 00:24:06.748 "traddr": "10.0.0.1", 00:24:06.748 "trsvcid": "43040" 00:24:06.748 }, 00:24:06.748 "auth": { 00:24:06.748 "state": "completed", 00:24:06.748 "digest": "sha512", 00:24:06.748 "dhgroup": "ffdhe3072" 00:24:06.748 } 00:24:06.748 } 00:24:06.748 ]' 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.748 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.008 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:07.008 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:07.949 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:07.950 16:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.950 16:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.950 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.210 00:24:08.210 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:08.210 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:08.210 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:08.471 { 00:24:08.471 "cntlid": 117, 00:24:08.471 "qid": 0, 00:24:08.471 "state": "enabled", 00:24:08.471 "thread": "nvmf_tgt_poll_group_000", 00:24:08.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:08.471 "listen_address": { 00:24:08.471 "trtype": "TCP", 00:24:08.471 "adrfam": "IPv4", 00:24:08.471 "traddr": "10.0.0.2", 00:24:08.471 "trsvcid": "4420" 00:24:08.471 }, 00:24:08.471 "peer_address": { 00:24:08.471 "trtype": "TCP", 00:24:08.471 "adrfam": "IPv4", 00:24:08.471 "traddr": "10.0.0.1", 00:24:08.471 "trsvcid": "43068" 00:24:08.471 }, 00:24:08.471 "auth": { 00:24:08.471 "state": "completed", 00:24:08.471 "digest": "sha512", 00:24:08.471 "dhgroup": "ffdhe3072" 00:24:08.471 } 00:24:08.471 } 00:24:08.471 ]' 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.471 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.731 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:08.731 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:09.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:09.672 16:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:09.672 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:09.672 16:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:09.985 00:24:09.985 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:09.985 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:09.985 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:10.257 { 00:24:10.257 "cntlid": 119, 00:24:10.257 "qid": 0, 00:24:10.257 "state": "enabled", 00:24:10.257 "thread": "nvmf_tgt_poll_group_000", 00:24:10.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:10.257 "listen_address": { 00:24:10.257 "trtype": "TCP", 00:24:10.257 "adrfam": "IPv4", 00:24:10.257 "traddr": "10.0.0.2", 00:24:10.257 "trsvcid": "4420" 00:24:10.257 }, 00:24:10.257 "peer_address": { 00:24:10.257 "trtype": 
"TCP", 00:24:10.257 "adrfam": "IPv4", 00:24:10.257 "traddr": "10.0.0.1", 00:24:10.257 "trsvcid": "43108" 00:24:10.257 }, 00:24:10.257 "auth": { 00:24:10.257 "state": "completed", 00:24:10.257 "digest": "sha512", 00:24:10.257 "dhgroup": "ffdhe3072" 00:24:10.257 } 00:24:10.257 } 00:24:10.257 ]' 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.257 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.517 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:10.517 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 
00:24:11.087 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:11.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.348 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.608 00:24:11.608 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:11.608 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:11.608 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:11.869 { 00:24:11.869 "cntlid": 121, 00:24:11.869 "qid": 0, 00:24:11.869 "state": "enabled", 00:24:11.869 "thread": "nvmf_tgt_poll_group_000", 00:24:11.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:11.869 "listen_address": { 00:24:11.869 "trtype": "TCP", 00:24:11.869 "adrfam": "IPv4", 00:24:11.869 "traddr": "10.0.0.2", 00:24:11.869 "trsvcid": "4420" 00:24:11.869 }, 00:24:11.869 "peer_address": { 00:24:11.869 "trtype": "TCP", 00:24:11.869 "adrfam": "IPv4", 00:24:11.869 "traddr": "10.0.0.1", 00:24:11.869 "trsvcid": "43154" 00:24:11.869 }, 00:24:11.869 "auth": { 00:24:11.869 "state": "completed", 00:24:11.869 "digest": "sha512", 00:24:11.869 "dhgroup": "ffdhe4096" 00:24:11.869 } 00:24:11.869 } 00:24:11.869 ]' 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:11.869 16:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.869 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.131 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:12.131 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:13.073 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:13.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:13.073 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.073 16:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.073 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.073 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.073 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:13.073 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:13.073 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.073 16:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.073 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.334 00:24:13.334 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:13.334 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:13.334 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:13.595 { 00:24:13.595 "cntlid": 123, 00:24:13.595 "qid": 0, 00:24:13.595 "state": "enabled", 00:24:13.595 "thread": "nvmf_tgt_poll_group_000", 00:24:13.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:13.595 "listen_address": { 00:24:13.595 "trtype": "TCP", 00:24:13.595 "adrfam": "IPv4", 00:24:13.595 "traddr": "10.0.0.2", 00:24:13.595 "trsvcid": "4420" 00:24:13.595 }, 00:24:13.595 "peer_address": { 00:24:13.595 "trtype": "TCP", 00:24:13.595 "adrfam": "IPv4", 00:24:13.595 "traddr": "10.0.0.1", 00:24:13.595 "trsvcid": "43190" 00:24:13.595 }, 00:24:13.595 "auth": { 00:24:13.595 "state": "completed", 00:24:13.595 "digest": "sha512", 00:24:13.595 "dhgroup": "ffdhe4096" 00:24:13.595 } 00:24:13.595 } 00:24:13.595 ]' 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.595 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:13.856 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:13.856 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:14.799 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.799 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.799 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.799 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.799 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.800 16:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.800 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.060 00:24:15.060 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:15.060 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:15.060 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:15.321 { 00:24:15.321 "cntlid": 125, 00:24:15.321 "qid": 0, 00:24:15.321 "state": "enabled", 00:24:15.321 "thread": "nvmf_tgt_poll_group_000", 00:24:15.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:15.321 "listen_address": { 00:24:15.321 "trtype": "TCP", 00:24:15.321 "adrfam": "IPv4", 00:24:15.321 "traddr": "10.0.0.2", 00:24:15.321 
"trsvcid": "4420" 00:24:15.321 }, 00:24:15.321 "peer_address": { 00:24:15.321 "trtype": "TCP", 00:24:15.321 "adrfam": "IPv4", 00:24:15.321 "traddr": "10.0.0.1", 00:24:15.321 "trsvcid": "43220" 00:24:15.321 }, 00:24:15.321 "auth": { 00:24:15.321 "state": "completed", 00:24:15.321 "digest": "sha512", 00:24:15.321 "dhgroup": "ffdhe4096" 00:24:15.321 } 00:24:15.321 } 00:24:15.321 ]' 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:15.321 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:15.581 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:15.581 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:16.522 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:16.782 00:24:16.782 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:16.782 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:16.782 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.041 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.041 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.042 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.042 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.042 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.042 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:17.042 { 00:24:17.042 "cntlid": 127, 00:24:17.042 "qid": 0, 00:24:17.042 "state": "enabled", 00:24:17.042 "thread": "nvmf_tgt_poll_group_000", 00:24:17.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:17.042 "listen_address": { 00:24:17.042 "trtype": "TCP", 00:24:17.042 "adrfam": "IPv4", 00:24:17.042 "traddr": "10.0.0.2", 00:24:17.042 "trsvcid": "4420" 00:24:17.042 }, 00:24:17.042 "peer_address": { 00:24:17.042 "trtype": "TCP", 00:24:17.042 "adrfam": "IPv4", 00:24:17.042 "traddr": "10.0.0.1", 00:24:17.042 "trsvcid": "43244" 00:24:17.042 }, 00:24:17.042 "auth": { 00:24:17.042 "state": "completed", 00:24:17.042 "digest": "sha512", 00:24:17.042 "dhgroup": "ffdhe4096" 00:24:17.042 } 00:24:17.042 } 00:24:17.042 ]' 00:24:17.042 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:17.042 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:17.042 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:17.042 16:48:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:17.042 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:17.042 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.042 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.042 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:17.302 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:17.302 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
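The loop at `target/auth.sh@119`-`@123` iterates dhgroups and key ids, reconfiguring the host with `bdev_nvme_set_options` and re-adding the host with an optional controller key; note that key 3 above is added without `--dhchap-ctrlr-key`, because `ckeys[3]` is empty and the `${ckeys[$3]:+...}` expansion drops the flag. A minimal sketch of that command construction (the RPC path is shortened and the helper name is made up for illustration; NQNs and addresses are taken from the log):

```python
# Illustrative reconstruction of one loop iteration's RPC command lines.
RPC = "scripts/rpc.py -s /var/tmp/host.sock"  # path shortened for the sketch
HOSTNQN = "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
SUBNQN = "nqn.2024-03.io.spdk:cnode0"

def iteration_cmds(digest, dhgroup, keyid, have_ckey=True):
    # Mirrors ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) in auth.sh:
    # the controller-key flag is only emitted when a ckey exists.
    ckey = f" --dhchap-ctrlr-key ckey{keyid}" if have_ckey else ""
    return [
        f"{RPC} bdev_nvme_set_options --dhchap-digests {digest}"
        f" --dhchap-dhgroups {dhgroup}",
        f"{RPC} nvmf_subsystem_add_host {SUBNQN} {HOSTNQN}"
        f" --dhchap-key key{keyid}{ckey}",
        f"{RPC} bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420"
        f" -q {HOSTNQN} -n {SUBNQN} -b nvme0 --dhchap-key key{keyid}{ckey}",
    ]

# Key 3 has no controller key in this run, matching the log:
cmds = iteration_cmds("sha512", "ffdhe4096", 3, have_ckey=False)
print(cmds[1])
```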
00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:18.245 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.245 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.506 00:24:18.506 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:18.506 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:18.506 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:18.767 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.767 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:18.767 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.767 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.767 16:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.767 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:18.767 { 00:24:18.767 "cntlid": 129, 00:24:18.767 "qid": 0, 00:24:18.767 "state": "enabled", 00:24:18.767 "thread": "nvmf_tgt_poll_group_000", 00:24:18.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:18.767 "listen_address": { 00:24:18.767 "trtype": "TCP", 00:24:18.767 "adrfam": "IPv4", 00:24:18.767 "traddr": "10.0.0.2", 00:24:18.767 "trsvcid": "4420" 00:24:18.767 }, 00:24:18.767 "peer_address": { 00:24:18.767 "trtype": "TCP", 00:24:18.767 "adrfam": "IPv4", 00:24:18.767 "traddr": "10.0.0.1", 00:24:18.767 "trsvcid": "43266" 00:24:18.767 }, 00:24:18.767 "auth": { 00:24:18.767 "state": "completed", 00:24:18.767 "digest": "sha512", 00:24:18.767 "dhgroup": "ffdhe6144" 00:24:18.767 } 00:24:18.767 } 00:24:18.767 ]' 00:24:18.768 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:18.768 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:18.768 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:18.768 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:18.768 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:19.029 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.029 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.029 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.029 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:19.029 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:19.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:19.970 16:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.970 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.540 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:20.540 { 00:24:20.540 "cntlid": 131, 00:24:20.540 "qid": 0, 00:24:20.540 "state": "enabled", 00:24:20.540 "thread": "nvmf_tgt_poll_group_000", 00:24:20.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:20.540 "listen_address": { 00:24:20.540 "trtype": "TCP", 00:24:20.540 "adrfam": "IPv4", 00:24:20.540 "traddr": "10.0.0.2", 00:24:20.540 
"trsvcid": "4420" 00:24:20.540 }, 00:24:20.540 "peer_address": { 00:24:20.540 "trtype": "TCP", 00:24:20.540 "adrfam": "IPv4", 00:24:20.540 "traddr": "10.0.0.1", 00:24:20.540 "trsvcid": "43298" 00:24:20.540 }, 00:24:20.540 "auth": { 00:24:20.540 "state": "completed", 00:24:20.540 "digest": "sha512", 00:24:20.540 "dhgroup": "ffdhe6144" 00:24:20.540 } 00:24:20.540 } 00:24:20.540 ]' 00:24:20.540 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:20.800 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:20.800 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:20.800 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:20.800 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:20.800 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.800 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.800 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.061 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:21.061 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:21.632 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:21.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:21.632 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:21.632 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.632 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.632 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.632 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:21.632 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.632 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.892 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:24:21.892 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:21.892 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:21.892 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:21.892 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:21.892 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:21.892 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.892 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.893 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.893 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.893 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.893 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.893 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.153 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:22.414 { 00:24:22.414 "cntlid": 133, 00:24:22.414 "qid": 0, 00:24:22.414 "state": "enabled", 00:24:22.414 "thread": "nvmf_tgt_poll_group_000", 00:24:22.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:22.414 "listen_address": { 00:24:22.414 "trtype": "TCP", 00:24:22.414 "adrfam": "IPv4", 00:24:22.414 "traddr": "10.0.0.2", 00:24:22.414 "trsvcid": "4420" 00:24:22.414 }, 00:24:22.414 "peer_address": { 00:24:22.414 "trtype": "TCP", 00:24:22.414 "adrfam": "IPv4", 00:24:22.414 "traddr": "10.0.0.1", 00:24:22.414 "trsvcid": "50682" 00:24:22.414 }, 00:24:22.414 "auth": { 00:24:22.414 "state": "completed", 00:24:22.414 "digest": "sha512", 00:24:22.414 "dhgroup": "ffdhe6144" 00:24:22.414 } 00:24:22.414 } 00:24:22.414 ]' 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:22.414 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:22.414 16:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:22.681 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:22.681 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:22.681 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:22.681 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:22.681 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:22.681 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:22.681 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.629 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.630 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.630 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:23.630 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:23.630 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:24.201 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:24.201 { 00:24:24.201 "cntlid": 135, 00:24:24.201 "qid": 0, 00:24:24.201 "state": "enabled", 00:24:24.201 "thread": "nvmf_tgt_poll_group_000", 00:24:24.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:24.201 "listen_address": { 00:24:24.201 "trtype": "TCP", 00:24:24.201 "adrfam": "IPv4", 00:24:24.201 "traddr": "10.0.0.2", 00:24:24.201 "trsvcid": "4420" 00:24:24.201 }, 00:24:24.201 "peer_address": { 00:24:24.201 "trtype": "TCP", 00:24:24.201 "adrfam": "IPv4", 00:24:24.201 "traddr": "10.0.0.1", 00:24:24.201 "trsvcid": "50708" 00:24:24.201 }, 00:24:24.201 "auth": { 00:24:24.201 "state": "completed", 00:24:24.201 "digest": "sha512", 00:24:24.201 "dhgroup": "ffdhe6144" 00:24:24.201 } 00:24:24.201 } 00:24:24.201 ]' 00:24:24.201 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:24.461 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:24.461 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:24.461 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:24.461 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:24.461 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:24.461 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:24.461 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:24.722 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:24.722 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:25.293 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:25.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:25.293 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.293 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.293 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.293 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.293 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.293 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:25.293 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:25.293 16:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.553 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.123 00:24:26.123 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:26.123 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:26.123 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:26.383 { 00:24:26.383 "cntlid": 137, 00:24:26.383 "qid": 0, 00:24:26.383 "state": "enabled", 00:24:26.383 "thread": "nvmf_tgt_poll_group_000", 00:24:26.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:26.383 "listen_address": { 00:24:26.383 "trtype": "TCP", 00:24:26.383 "adrfam": "IPv4", 00:24:26.383 "traddr": "10.0.0.2", 00:24:26.383 
"trsvcid": "4420" 00:24:26.383 }, 00:24:26.383 "peer_address": { 00:24:26.383 "trtype": "TCP", 00:24:26.383 "adrfam": "IPv4", 00:24:26.383 "traddr": "10.0.0.1", 00:24:26.383 "trsvcid": "50732" 00:24:26.383 }, 00:24:26.383 "auth": { 00:24:26.383 "state": "completed", 00:24:26.383 "digest": "sha512", 00:24:26.383 "dhgroup": "ffdhe8192" 00:24:26.383 } 00:24:26.383 } 00:24:26.383 ]' 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:26.383 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:26.643 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:26.643 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:27.212 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:27.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:27.471 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.471 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.471 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.471 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.471 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:27.471 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:27.472 16:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.472 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.039 00:24:28.039 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:28.039 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:28.039 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:28.299 { 00:24:28.299 "cntlid": 139, 00:24:28.299 "qid": 0, 00:24:28.299 "state": "enabled", 00:24:28.299 "thread": "nvmf_tgt_poll_group_000", 00:24:28.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:28.299 "listen_address": { 00:24:28.299 "trtype": "TCP", 00:24:28.299 "adrfam": "IPv4", 00:24:28.299 "traddr": "10.0.0.2", 00:24:28.299 "trsvcid": "4420" 00:24:28.299 }, 00:24:28.299 "peer_address": { 00:24:28.299 "trtype": "TCP", 00:24:28.299 "adrfam": "IPv4", 00:24:28.299 "traddr": "10.0.0.1", 00:24:28.299 "trsvcid": "50758" 00:24:28.299 }, 00:24:28.299 "auth": { 00:24:28.299 "state": "completed", 00:24:28.299 "digest": "sha512", 00:24:28.299 "dhgroup": "ffdhe8192" 00:24:28.299 } 00:24:28.299 } 00:24:28.299 ]' 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:28.299 16:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:28.299 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:28.560 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:28.560 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: --dhchap-ctrl-secret DHHC-1:02:N2RhZDRlOWZlYzFlOWIzYTgwNDJkNWE5NTRlZWI4MDdjMDMzYWU1MTE2NjM1YTNmsiNGkQ==: 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:29.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:29.499 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:29.500 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:24:29.500 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.500 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.500 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.500 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.500 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.500 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.069 00:24:30.069 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:30.069 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:30.069 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.329 16:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:30.329 { 00:24:30.329 "cntlid": 141, 00:24:30.329 "qid": 0, 00:24:30.329 "state": "enabled", 00:24:30.329 "thread": "nvmf_tgt_poll_group_000", 00:24:30.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:30.329 "listen_address": { 00:24:30.329 "trtype": "TCP", 00:24:30.329 "adrfam": "IPv4", 00:24:30.329 "traddr": "10.0.0.2", 00:24:30.329 "trsvcid": "4420" 00:24:30.329 }, 00:24:30.329 "peer_address": { 00:24:30.329 "trtype": "TCP", 00:24:30.329 "adrfam": "IPv4", 00:24:30.329 "traddr": "10.0.0.1", 00:24:30.329 "trsvcid": "50776" 00:24:30.329 }, 00:24:30.329 "auth": { 00:24:30.329 "state": "completed", 00:24:30.329 "digest": "sha512", 00:24:30.329 "dhgroup": "ffdhe8192" 00:24:30.329 } 00:24:30.329 } 00:24:30.329 ]' 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:30.329 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:30.589 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:30.589 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:01:MDQ5NmEzZGE3YjU4YzkyNDY4OGQ2NWIxNTljMDFmZWYVOlo8: 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:31.530 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:32.099 00:24:32.099 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:32.099 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:32.099 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:32.099 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.099 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:32.099 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.099 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:32.359 { 00:24:32.359 "cntlid": 143, 00:24:32.359 "qid": 0, 00:24:32.359 "state": "enabled", 00:24:32.359 "thread": "nvmf_tgt_poll_group_000", 00:24:32.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:32.359 "listen_address": { 00:24:32.359 "trtype": "TCP", 00:24:32.359 "adrfam": 
"IPv4", 00:24:32.359 "traddr": "10.0.0.2", 00:24:32.359 "trsvcid": "4420" 00:24:32.359 }, 00:24:32.359 "peer_address": { 00:24:32.359 "trtype": "TCP", 00:24:32.359 "adrfam": "IPv4", 00:24:32.359 "traddr": "10.0.0.1", 00:24:32.359 "trsvcid": "38184" 00:24:32.359 }, 00:24:32.359 "auth": { 00:24:32.359 "state": "completed", 00:24:32.359 "digest": "sha512", 00:24:32.359 "dhgroup": "ffdhe8192" 00:24:32.359 } 00:24:32.359 } 00:24:32.359 ]' 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.359 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.619 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:32.619 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:33.188 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:33.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:33.188 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:33.188 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.188 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.188 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:33.448 16:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.448 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.016 00:24:34.016 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:34.016 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:34.016 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:34.276 { 00:24:34.276 "cntlid": 145, 00:24:34.276 "qid": 0, 00:24:34.276 "state": "enabled", 00:24:34.276 "thread": "nvmf_tgt_poll_group_000", 00:24:34.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:34.276 "listen_address": { 00:24:34.276 "trtype": "TCP", 00:24:34.276 "adrfam": "IPv4", 00:24:34.276 "traddr": "10.0.0.2", 00:24:34.276 "trsvcid": "4420" 00:24:34.276 }, 00:24:34.276 "peer_address": { 00:24:34.276 "trtype": "TCP", 00:24:34.276 "adrfam": "IPv4", 00:24:34.276 "traddr": "10.0.0.1", 00:24:34.276 "trsvcid": "38198" 00:24:34.276 }, 00:24:34.276 "auth": { 00:24:34.276 "state": 
"completed", 00:24:34.276 "digest": "sha512", 00:24:34.276 "dhgroup": "ffdhe8192" 00:24:34.276 } 00:24:34.276 } 00:24:34.276 ]' 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:34.276 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.537 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:34.537 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YmRhODllNDI1YzgwMTcxZGQ2ZDM5MDY4OWZhYjhlY2RhZjY4ZjNiZTVmZjE1NjhhRoJVwg==: --dhchap-ctrl-secret 
DHHC-1:03:M2YxNzgyNjcwMzg5N2E3MGY3NTg2NzVkNGJmM2I3NTkxNmFjNzU1ZTVkYmFlMDQ5YjZmNmRjYjgzNDBiZTExZnz5PYY=: 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:35.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:24:35.475 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:35.476 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:24:35.476 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:24:35.476 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.476 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:35.476 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.476 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:24:35.476 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:35.476 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:35.735 request: 00:24:35.735 { 00:24:35.735 "name": "nvme0", 00:24:35.735 "trtype": "tcp", 00:24:35.735 "traddr": "10.0.0.2", 00:24:35.735 "adrfam": "ipv4", 00:24:35.735 "trsvcid": "4420", 00:24:35.735 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:35.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:35.735 "prchk_reftag": false, 00:24:35.735 "prchk_guard": false, 00:24:35.735 "hdgst": false, 00:24:35.735 "ddgst": false, 00:24:35.735 "dhchap_key": "key2", 00:24:35.735 "allow_unrecognized_csi": false, 00:24:35.735 "method": "bdev_nvme_attach_controller", 00:24:35.735 "req_id": 1 00:24:35.735 } 00:24:35.735 Got JSON-RPC error response 00:24:35.735 response: 00:24:35.735 { 00:24:35.735 "code": -5, 00:24:35.735 "message": 
"Input/output error" 00:24:35.735 } 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:35.736 16:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:35.736 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:36.306 request: 00:24:36.306 { 00:24:36.306 "name": "nvme0", 00:24:36.306 "trtype": "tcp", 00:24:36.306 "traddr": "10.0.0.2", 00:24:36.306 "adrfam": "ipv4", 00:24:36.306 "trsvcid": "4420", 00:24:36.306 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:36.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:36.306 "prchk_reftag": false, 00:24:36.306 "prchk_guard": false, 00:24:36.306 "hdgst": 
false, 00:24:36.306 "ddgst": false, 00:24:36.306 "dhchap_key": "key1", 00:24:36.306 "dhchap_ctrlr_key": "ckey2", 00:24:36.306 "allow_unrecognized_csi": false, 00:24:36.306 "method": "bdev_nvme_attach_controller", 00:24:36.306 "req_id": 1 00:24:36.306 } 00:24:36.306 Got JSON-RPC error response 00:24:36.306 response: 00:24:36.306 { 00:24:36.306 "code": -5, 00:24:36.306 "message": "Input/output error" 00:24:36.306 } 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.306 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.877 request: 00:24:36.877 { 00:24:36.877 "name": "nvme0", 00:24:36.877 "trtype": 
"tcp", 00:24:36.877 "traddr": "10.0.0.2", 00:24:36.877 "adrfam": "ipv4", 00:24:36.877 "trsvcid": "4420", 00:24:36.877 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:36.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:36.877 "prchk_reftag": false, 00:24:36.877 "prchk_guard": false, 00:24:36.877 "hdgst": false, 00:24:36.877 "ddgst": false, 00:24:36.877 "dhchap_key": "key1", 00:24:36.877 "dhchap_ctrlr_key": "ckey1", 00:24:36.877 "allow_unrecognized_csi": false, 00:24:36.877 "method": "bdev_nvme_attach_controller", 00:24:36.877 "req_id": 1 00:24:36.877 } 00:24:36.877 Got JSON-RPC error response 00:24:36.877 response: 00:24:36.877 { 00:24:36.877 "code": -5, 00:24:36.877 "message": "Input/output error" 00:24:36.877 } 00:24:36.877 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:36.877 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:36.877 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:36.877 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3147644 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 3147644 ']' 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3147644 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3147644 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3147644' 00:24:36.878 killing process with pid 3147644 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3147644 00:24:36.878 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3147644 00:24:37.138 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:37.138 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:37.138 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.138 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.138 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=3175483 00:24:37.138 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 3175483 00:24:37.138 16:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:37.138 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3175483 ']' 00:24:37.138 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.138 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:37.138 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.138 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:37.138 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3175483 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3175483 ']' 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.079 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.079 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.079 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:24:38.079 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:24:38.079 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.079 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.079 null0 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ew5 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.jGp ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jGp 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nhW 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.w1x ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w1x 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oPh 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.2pW ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2pW 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:38.340 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wei 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:38.341 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:39.283 nvme0n1 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:39.283 { 00:24:39.283 "cntlid": 1, 00:24:39.283 "qid": 0, 00:24:39.283 "state": "enabled", 00:24:39.283 "thread": "nvmf_tgt_poll_group_000", 00:24:39.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:39.283 "listen_address": { 00:24:39.283 "trtype": "TCP", 00:24:39.283 "adrfam": "IPv4", 00:24:39.283 "traddr": "10.0.0.2", 00:24:39.283 "trsvcid": "4420" 00:24:39.283 }, 00:24:39.283 "peer_address": { 00:24:39.283 "trtype": "TCP", 00:24:39.283 "adrfam": "IPv4", 00:24:39.283 "traddr": 
"10.0.0.1", 00:24:39.283 "trsvcid": "38264" 00:24:39.283 }, 00:24:39.283 "auth": { 00:24:39.283 "state": "completed", 00:24:39.283 "digest": "sha512", 00:24:39.283 "dhgroup": "ffdhe8192" 00:24:39.283 } 00:24:39.283 } 00:24:39.283 ]' 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:39.283 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:39.544 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:39.544 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:39.544 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:39.544 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:39.544 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:39.544 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:39.544 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=: 00:24:40.485 16:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:40.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:40.485 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:40.746 16:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:40.746 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:40.746 request: 00:24:40.746 { 00:24:40.746 "name": "nvme0", 00:24:40.746 "trtype": "tcp", 00:24:40.746 "traddr": "10.0.0.2", 00:24:40.746 "adrfam": "ipv4", 00:24:40.746 "trsvcid": "4420", 00:24:40.746 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:40.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:40.746 "prchk_reftag": false, 00:24:40.746 "prchk_guard": false, 00:24:40.747 "hdgst": false, 00:24:40.747 "ddgst": false, 00:24:40.747 "dhchap_key": "key3", 00:24:40.747 
"allow_unrecognized_csi": false, 00:24:40.747 "method": "bdev_nvme_attach_controller", 00:24:40.747 "req_id": 1 00:24:40.747 } 00:24:40.747 Got JSON-RPC error response 00:24:40.747 response: 00:24:40.747 { 00:24:40.747 "code": -5, 00:24:40.747 "message": "Input/output error" 00:24:40.747 } 00:24:40.747 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:40.747 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.747 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.747 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.747 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:40.747 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:40.747 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:40.747 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:41.008 16:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:41.008 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:41.008 request:
00:24:41.008 {
00:24:41.008 "name": "nvme0",
00:24:41.008 "trtype": "tcp",
00:24:41.008 "traddr": "10.0.0.2",
00:24:41.008 "adrfam": "ipv4",
00:24:41.008 "trsvcid": "4420",
00:24:41.008 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:24:41.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:41.008 "prchk_reftag": false,
00:24:41.008 "prchk_guard": false,
00:24:41.008 "hdgst": false,
00:24:41.008 "ddgst": false,
00:24:41.008 "dhchap_key": "key3",
00:24:41.008 "allow_unrecognized_csi": false,
00:24:41.008 "method": "bdev_nvme_attach_controller",
00:24:41.008 "req_id": 1
00:24:41.008 }
00:24:41.008 Got JSON-RPC error response
00:24:41.008 response:
00:24:41.008 {
00:24:41.008 "code": -5,
00:24:41.008 "message": "Input/output error"
00:24:41.008 }
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:41.269 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:41.528 request:
00:24:41.528 {
00:24:41.528 "name": "nvme0",
00:24:41.528 "trtype": "tcp",
00:24:41.528 "traddr": "10.0.0.2",
00:24:41.528 "adrfam": "ipv4",
00:24:41.528 "trsvcid": "4420",
00:24:41.528 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:24:41.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:41.528 "prchk_reftag": false,
00:24:41.528 "prchk_guard": false,
00:24:41.528 "hdgst": false,
00:24:41.528 "ddgst": false,
00:24:41.529 "dhchap_key": "key0",
00:24:41.529 "dhchap_ctrlr_key": "key1",
00:24:41.529 "allow_unrecognized_csi": false,
00:24:41.529 "method": "bdev_nvme_attach_controller",
00:24:41.529 "req_id": 1
00:24:41.529 }
00:24:41.529 Got JSON-RPC error response
00:24:41.529 response:
00:24:41.529 {
00:24:41.529 "code": -5,
00:24:41.529 "message": "Input/output error"
00:24:41.529 }
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:24:41.789 nvme0n1
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:24:41.789 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:42.049 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:42.049 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:42.049 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:42.309 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1
00:24:42.309 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:42.309 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:42.309 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:42.309 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:24:42.309 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:42.309 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:43.248 nvme0n1
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:24:43.248 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:43.508 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.508 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=:
00:24:43.508 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: --dhchap-ctrl-secret DHHC-1:03:YTY5Zjk1M2M1YzUxNDJlYWNkYTZkZWZmZjM4YWZiY2MwNGU4NTY5MjdlOTljOTY0YWNlNGZhYzczN2ExYWJhZSLiyNg=:
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:44.448 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:45.019 request:
00:24:45.019 {
00:24:45.019 "name": "nvme0",
00:24:45.019 "trtype": "tcp",
00:24:45.019 "traddr": "10.0.0.2",
00:24:45.019 "adrfam": "ipv4",
00:24:45.019 "trsvcid": "4420",
00:24:45.019 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:24:45.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:45.019 "prchk_reftag": false,
00:24:45.019 "prchk_guard": false,
00:24:45.019 "hdgst": false,
00:24:45.019 "ddgst": false,
00:24:45.019 "dhchap_key": "key1",
00:24:45.019 "allow_unrecognized_csi": false,
00:24:45.019 "method": "bdev_nvme_attach_controller",
00:24:45.019 "req_id": 1
00:24:45.019 }
00:24:45.019 Got JSON-RPC error response
00:24:45.019 response:
00:24:45.019 {
00:24:45.019 "code": -5,
00:24:45.019 "message": "Input/output error"
00:24:45.019 }
00:24:45.019 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:24:45.019 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:45.019 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:45.019 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:45.019 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:45.019 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:45.019 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:45.589 nvme0n1
00:24:45.589 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:24:45.589 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:24:45.589 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:45.850 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:45.850 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:45.850 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:46.111 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:46.111 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:46.111 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:46.111 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:46.111 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:24:46.111 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:24:46.111 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:24:46.371 nvme0n1
00:24:46.371 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:24:46.371 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:24:46.371 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:46.371 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.371 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:46.371 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: '' 2s
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE:
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE: ]]
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NmFmYTJlNmU2MzU5MWIzZGZjZDE0ZWJmMGE5ZWY4NmEyW7EE:
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:24:46.631 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: 2s
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==:
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==: ]]
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTBmOGY5MGRlMDI1Y2NjMWNlYWI2N2UwMDI3Mzg2MWM4NjI5NjQwMjMwZGI1YmZiMuJeGQ==:
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:24:49.174 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:51.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:51.093 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:51.705 nvme0n1
00:24:51.705 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:51.705 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:51.705 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:51.705 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:51.705 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:51.705 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:24:52.320 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:24:52.581 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:24:52.581 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:24:52.581 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:52.841 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:53.102 request:
00:24:53.102 {
00:24:53.102 "name": "nvme0",
00:24:53.102 "dhchap_key": "key1",
00:24:53.102 "dhchap_ctrlr_key": "key3",
00:24:53.102 "method": "bdev_nvme_set_keys",
00:24:53.102 "req_id": 1
00:24:53.102 }
00:24:53.102 Got JSON-RPC error response
00:24:53.102 response:
00:24:53.102 {
00:24:53.102 "code": -13,
00:24:53.102 "message": "Permission denied"
00:24:53.102 }
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:24:53.363 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:24:54.304 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:24:54.304 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:24:54.304 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:54.565 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:24:54.565 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:54.565 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:54.565 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:54.565 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:54.565 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:54.565 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:54.565 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:55.507 nvme0n1
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:55.507 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:56.078 request:
00:24:56.078 {
00:24:56.078 "name": "nvme0",
00:24:56.078 "dhchap_key": "key2",
00:24:56.078 "dhchap_ctrlr_key": "key0",
00:24:56.078 "method": "bdev_nvme_set_keys",
00:24:56.078 "req_id": 1
00:24:56.078 }
00:24:56.078 Got JSON-RPC error response
00:24:56.078 response:
00:24:56.078 {
00:24:56.078 "code": -13,
00:24:56.078 "message": "Permission denied"
00:24:56.078 }
00:24:56.078 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:24:56.078 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:56.078 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:56.078 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:56.078 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:24:56.078 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:24:56.078 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:56.078 16:49:03
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:56.078 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3147730 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3147730 ']' 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3147730 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3147730 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@970 -- # echo 'killing process with pid 3147730' 00:24:57.462 killing process with pid 3147730 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3147730 00:24:57.462 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3147730 00:24:57.722 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:57.722 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:57.722 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:24:57.722 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:57.722 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:24:57.722 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:57.722 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:57.722 rmmod nvme_tcp 00:24:57.722 rmmod nvme_fabrics 00:24:57.722 rmmod nvme_keyring 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 3175483 ']' 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 3175483 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3175483 ']' 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3175483 00:24:57.723 
16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3175483 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3175483' 00:24:57.723 killing process with pid 3175483 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3175483 00:24:57.723 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3175483 00:24:57.984 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:57.984 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:24:57.984 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@254 -- # local dev 00:24:57.984 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:57.984 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:57.984 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:57.984 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:59.897 16:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # return 0 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:59.897 16:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@274 -- # iptr 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-save 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-restore 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ew5 /tmp/spdk.key-sha256.nhW /tmp/spdk.key-sha384.oPh /tmp/spdk.key-sha512.wei /tmp/spdk.key-sha512.jGp /tmp/spdk.key-sha384.w1x /tmp/spdk.key-sha256.2pW '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:59.897 00:24:59.897 real 2m44.305s 00:24:59.897 user 6m6.776s 00:24:59.897 sys 0m24.466s 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:59.897 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.897 ************************************ 00:24:59.897 END TEST nvmf_auth_target 00:24:59.897 ************************************ 00:24:59.898 16:49:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:59.898 16:49:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:59.898 16:49:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:59.898 16:49:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:59.898 16:49:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:00.159 ************************************ 00:25:00.159 START TEST nvmf_bdevio_no_huge 00:25:00.159 ************************************ 00:25:00.159 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:00.159 * Looking for test storage... 00:25:00.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.159 
16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:00.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.159 --rc genhtml_branch_coverage=1 00:25:00.159 --rc genhtml_function_coverage=1 00:25:00.159 --rc genhtml_legend=1 00:25:00.159 --rc 
geninfo_all_blocks=1 00:25:00.159 --rc geninfo_unexecuted_blocks=1 00:25:00.159 00:25:00.159 ' 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:00.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.159 --rc genhtml_branch_coverage=1 00:25:00.159 --rc genhtml_function_coverage=1 00:25:00.159 --rc genhtml_legend=1 00:25:00.159 --rc geninfo_all_blocks=1 00:25:00.159 --rc geninfo_unexecuted_blocks=1 00:25:00.159 00:25:00.159 ' 00:25:00.159 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:00.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.159 --rc genhtml_branch_coverage=1 00:25:00.159 --rc genhtml_function_coverage=1 00:25:00.159 --rc genhtml_legend=1 00:25:00.160 --rc geninfo_all_blocks=1 00:25:00.160 --rc geninfo_unexecuted_blocks=1 00:25:00.160 00:25:00.160 ' 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:00.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.160 --rc genhtml_branch_coverage=1 00:25:00.160 --rc genhtml_function_coverage=1 00:25:00.160 --rc genhtml_legend=1 00:25:00.160 --rc geninfo_all_blocks=1 00:25:00.160 --rc geninfo_unexecuted_blocks=1 00:25:00.160 00:25:00.160 ' 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:00.160 16:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:00.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:00.160 16:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # xtrace_disable 00:25:00.160 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # pci_devs=() 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # 
net_devs=() 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # e810=() 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # local -ga e810 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # x722=() 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # local -ga x722 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # mlx=() 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # local -ga mlx 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.304 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:08.305 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.305 16:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:08.305 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:08.305 16:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:08.305 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:08.305 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # is_hw=yes 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@247 -- # create_target_ns 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:25:08.305 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:25:08.306 16:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:08.306 10.0.0.1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:08.306 10.0.0.2 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:08.306 
16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:08.306 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:08.307 
16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:08.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.618 ms 00:25:08.307 00:25:08.307 --- 10.0.0.1 ping statistics --- 00:25:08.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.307 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:08.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:25:08.307 00:25:08.307 --- 10.0.0.2 ping statistics --- 00:25:08.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.307 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # return 0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:08.307 
16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.307 
16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:08.307 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # return 1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev= 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # return 0 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # return 1 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev= 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # return 0 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:08.308 ' 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=3183836 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 3183836 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3183836 ']' 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:08.308 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.308 [2024-11-05 16:49:14.673943] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:25:08.308 [2024-11-05 16:49:14.674052] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:08.308 [2024-11-05 16:49:14.783062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.308 [2024-11-05 16:49:14.842704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.308 [2024-11-05 16:49:14.842762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.308 [2024-11-05 16:49:14.842771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.308 [2024-11-05 16:49:14.842778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.308 [2024-11-05 16:49:14.842785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:08.308 [2024-11-05 16:49:14.844255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:08.308 [2024-11-05 16:49:14.844410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:08.309 [2024-11-05 16:49:14.844592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:08.309 [2024-11-05 16:49:14.844692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.571 [2024-11-05 16:49:15.541295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:08.571 16:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.571 Malloc0 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:08.571 [2024-11-05 16:49:15.595252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.571 16:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:08.571 { 00:25:08.571 "params": { 00:25:08.571 "name": "Nvme$subsystem", 00:25:08.571 "trtype": "$TEST_TRANSPORT", 00:25:08.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.571 "adrfam": "ipv4", 00:25:08.571 "trsvcid": "$NVMF_PORT", 00:25:08.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.571 "hdgst": ${hdgst:-false}, 00:25:08.571 "ddgst": ${ddgst:-false} 00:25:08.571 }, 00:25:08.571 "method": "bdev_nvme_attach_controller" 00:25:08.571 } 00:25:08.571 EOF 00:25:08.571 )") 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 
00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:25:08.571 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:08.571 "params": { 00:25:08.571 "name": "Nvme1", 00:25:08.571 "trtype": "tcp", 00:25:08.571 "traddr": "10.0.0.2", 00:25:08.571 "adrfam": "ipv4", 00:25:08.571 "trsvcid": "4420", 00:25:08.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.571 "hdgst": false, 00:25:08.571 "ddgst": false 00:25:08.571 }, 00:25:08.571 "method": "bdev_nvme_attach_controller" 00:25:08.571 }' 00:25:08.833 [2024-11-05 16:49:15.653560] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:25:08.833 [2024-11-05 16:49:15.653638] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3183925 ] 00:25:08.833 [2024-11-05 16:49:15.735329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:08.833 [2024-11-05 16:49:15.790877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.833 [2024-11-05 16:49:15.791116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.833 [2024-11-05 16:49:15.791120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.405 I/O targets: 00:25:09.405 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:09.405 00:25:09.405 00:25:09.405 CUnit - A unit testing framework for C - Version 2.1-3 00:25:09.405 http://cunit.sourceforge.net/ 00:25:09.405 00:25:09.405 00:25:09.405 Suite: bdevio tests on: Nvme1n1 00:25:09.405 Test: blockdev write read block ...passed 00:25:09.405 Test: blockdev write zeroes read block ...passed 00:25:09.405 Test: blockdev write zeroes read no split ...passed 00:25:09.405 Test: blockdev write zeroes 
read split ...passed 00:25:09.405 Test: blockdev write zeroes read split partial ...passed 00:25:09.405 Test: blockdev reset ...[2024-11-05 16:49:16.295765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:09.405 [2024-11-05 16:49:16.295835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda5800 (9): Bad file descriptor 00:25:09.405 [2024-11-05 16:49:16.354838] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:25:09.405 passed 00:25:09.405 Test: blockdev write read 8 blocks ...passed 00:25:09.405 Test: blockdev write read size > 128k ...passed 00:25:09.405 Test: blockdev write read invalid size ...passed 00:25:09.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:09.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:09.405 Test: blockdev write read max offset ...passed 00:25:09.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:09.666 Test: blockdev writev readv 8 blocks ...passed 00:25:09.666 Test: blockdev writev readv 30 x 1block ...passed 00:25:09.667 Test: blockdev writev readv block ...passed 00:25:09.667 Test: blockdev writev readv size > 128k ...passed 00:25:09.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:09.667 Test: blockdev comparev and writev ...[2024-11-05 16:49:16.579634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.667 [2024-11-05 16:49:16.579660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.579672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.667 [2024-11-05 
16:49:16.579678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.580180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.667 [2024-11-05 16:49:16.580190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.580200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.667 [2024-11-05 16:49:16.580206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.580683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.667 [2024-11-05 16:49:16.580695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.580705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.667 [2024-11-05 16:49:16.580711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.581173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.667 [2024-11-05 16:49:16.581182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.581192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:25:09.667 [2024-11-05 16:49:16.581197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:09.667 passed 00:25:09.667 Test: blockdev nvme passthru rw ...passed 00:25:09.667 Test: blockdev nvme passthru vendor specific ...[2024-11-05 16:49:16.665606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:09.667 [2024-11-05 16:49:16.665618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.665878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:09.667 [2024-11-05 16:49:16.665887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.666139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:09.667 [2024-11-05 16:49:16.666146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:09.667 [2024-11-05 16:49:16.666366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:09.667 [2024-11-05 16:49:16.666375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:09.667 passed 00:25:09.667 Test: blockdev nvme admin passthru ...passed 00:25:09.667 Test: blockdev copy ...passed 00:25:09.667 00:25:09.667 Run Summary: Type Total Ran Passed Failed Inactive 00:25:09.667 suites 1 1 n/a 0 0 00:25:09.667 tests 23 23 23 0 0 00:25:09.667 asserts 152 152 152 0 n/a 00:25:09.667 00:25:09.667 Elapsed time = 1.132 seconds 
00:25:09.927 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.927 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.927 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:10.188 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:10.188 rmmod nvme_tcp 00:25:10.188 rmmod nvme_fabrics 00:25:10.188 rmmod nvme_keyring 00:25:10.188 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:10.188 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@106 -- # set -e 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 3183836 ']' 00:25:10.189 16:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 3183836 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3183836 ']' 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3183836 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3183836 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3183836' 00:25:10.189 killing process with pid 3183836 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3183836 00:25:10.189 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3183836 00:25:10.451 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:10.451 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:25:10.451 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@254 -- # local dev 00:25:10.451 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:10.451 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 
00:25:10.451 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:10.451 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # return 0 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:13.000 16:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@274 -- # iptr 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-save 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-restore 00:25:13.000 00:25:13.000 real 0m12.607s 00:25:13.000 user 0m14.938s 00:25:13.000 sys 0m6.635s 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:13.000 ************************************ 00:25:13.000 END TEST nvmf_bdevio_no_huge 00:25:13.000 ************************************ 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:13.000 16:49:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:13.000 ************************************ 00:25:13.000 START TEST nvmf_tls 00:25:13.000 ************************************ 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:13.001 * Looking for test storage... 00:25:13.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
scripts/common.sh@338 -- # local 'op=<' 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.001 --rc genhtml_branch_coverage=1 00:25:13.001 --rc genhtml_function_coverage=1 00:25:13.001 --rc genhtml_legend=1 00:25:13.001 --rc geninfo_all_blocks=1 00:25:13.001 --rc geninfo_unexecuted_blocks=1 00:25:13.001 00:25:13.001 ' 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.001 --rc genhtml_branch_coverage=1 00:25:13.001 --rc genhtml_function_coverage=1 00:25:13.001 --rc genhtml_legend=1 00:25:13.001 --rc geninfo_all_blocks=1 00:25:13.001 --rc geninfo_unexecuted_blocks=1 00:25:13.001 00:25:13.001 ' 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.001 --rc genhtml_branch_coverage=1 00:25:13.001 --rc genhtml_function_coverage=1 00:25:13.001 --rc genhtml_legend=1 00:25:13.001 --rc geninfo_all_blocks=1 00:25:13.001 --rc geninfo_unexecuted_blocks=1 00:25:13.001 00:25:13.001 ' 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.001 --rc genhtml_branch_coverage=1 00:25:13.001 --rc genhtml_function_coverage=1 00:25:13.001 --rc genhtml_legend=1 00:25:13.001 --rc geninfo_all_blocks=1 00:25:13.001 --rc geninfo_unexecuted_blocks=1 00:25:13.001 00:25:13.001 ' 00:25:13.001 16:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:13.001 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:25:13.001 16:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:13.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:13.002 16:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # xtrace_disable 00:25:13.002 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # pci_devs=() 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # net_devs=() 00:25:21.151 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # e810=() 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # local -ga e810 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # x722=() 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # local -ga x722 00:25:21.152 
16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # mlx=() 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # local -ga mlx 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:21.152 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:21.152 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:21.152 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:21.152 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # is_hw=yes 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@247 -- # create_target_ns 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 
00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 
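The trace above allocates addresses from an integer pool (167772161 = 0x0a000001) and converts each value to dotted-quad form with a printf-based `val_to_ip` helper, with each initiator/target pair taking two consecutive addresses via `ips=("$ip" $((++ip)))`. A minimal re-implementation of that arithmetic (a sketch matching the trace's behavior, not the actual nvmf/setup.sh source):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to dotted-quad notation, as the trace's
# val_to_ip does for 167772161 -> 10.0.0.1 and 167772162 -> 10.0.0.2.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) \
    $((  val        & 0xff ))
}

# Each initiator/target pair consumes two consecutive pool addresses,
# mirroring ips=("$ip" $((++ip))) in the trace.
ip_pool=$((0x0a000001))
initiator_ip=$(val_to_ip "$ip_pool")
target_ip=$(val_to_ip $(( ip_pool + 1 )))
echo "$initiator_ip $target_ip"   # 10.0.0.1 10.0.0.2
```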
00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:21.152 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:25:21.153 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:21.153 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 
10.0.0.1/24 dev cvl_0_0' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:21.153 10.0.0.1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 
10.0.0.2 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:21.153 10.0.0.2 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # 
dev=cvl_0_0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:21.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
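The `ipts` call above expands into an `iptables` rule tagged with an `-m comment` containing the full rule text prefixed by `SPDK_NVMF:`, so teardown can later locate and delete exactly the rules the test suite inserted. A sketch of that tagging, with `echo` substituted for the real `iptables` invocation so it runs without root (the actual helper lives in nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Tag every rule with a comment recording the rule's own arguments; the
# real helper execs iptables, but echoing here makes the constructed
# command inspectable without root privileges.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Reproduces the rule seen in the trace for the NVMe-oF listener port.
ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```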
00:25:21.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.607 ms 00:25:21.153 00:25:21.153 --- 10.0.0.1 ping statistics --- 00:25:21.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.153 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:21.153 16:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:21.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:25:21.153 00:25:21.153 --- 10.0.0.2 ping statistics --- 00:25:21.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.153 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # return 0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
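In the lookups above, `get_ip_address` resolves a logical name (initiator0/target0) to a real netdev through `dev_map`, then reads back the IP that `set_ip` earlier wrote to `/sys/class/net/<dev>/ifalias`. A simplified emulation of that store/read round-trip, using a temp directory in place of sysfs so it needs no interfaces or root (helper names follow the trace; the emulation itself is hypothetical):

```shell
#!/usr/bin/env bash
# Emulate the ifalias-backed IP bookkeeping from the trace under a temp
# directory instead of /sys/class/net.
sysfs=$(mktemp -d)

set_ip_alias() {  # <dev> <ip>: stash the IP, as set_ip's tee does
  mkdir -p "$sysfs/$1"
  echo "$2" > "$sysfs/$1/ifalias"
}

get_ip_address() {  # <dev>: read it back, as get_ip_address's cat does
  cat "$sysfs/$1/ifalias"
}

set_ip_alias cvl_0_0 10.0.0.1
set_ip_alias cvl_0_1 10.0.0.2
get_ip_address cvl_0_0   # prints 10.0.0.1
```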
00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:21.153 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # return 1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev= 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # return 0 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # return 1 00:25:21.154 16:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev= 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # return 0 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:21.154 ' 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3188565 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3188565 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
--wait-for-rpc 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3188565 ']' 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:21.154 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.154 [2024-11-05 16:49:27.492531] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:25:21.154 [2024-11-05 16:49:27.492649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.154 [2024-11-05 16:49:27.594826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.154 [2024-11-05 16:49:27.644273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.154 [2024-11-05 16:49:27.644351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.154 [2024-11-05 16:49:27.644360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.154 [2024-11-05 16:49:27.644367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:21.154 [2024-11-05 16:49:27.644374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.154 [2024-11-05 16:49:27.645141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.416 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:21.416 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:21.416 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:21.416 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:21.416 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.416 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.416 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:25:21.416 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:21.678 true 00:25:21.678 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:21.678 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:25:21.678 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:25:21.678 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:25:21.678 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:21.939 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:21.939 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:25:22.201 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:25:22.201 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:25:22.201 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:22.201 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:25:22.201 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:22.461 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:25:22.461 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:25:22.461 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:25:22.461 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:22.722 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:25:22.722 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:25:22.722 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:22.983 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:22.983 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r 
.enable_ktls 00:25:22.983 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:25:22.983 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:25:22.983 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:23.244 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:23.244 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 
00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.2peQU8JQgC 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.OO02kbRYCJ 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2peQU8JQgC 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.OO02kbRYCJ 00:25:23.505 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:23.766 16:49:30 
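The `format_interchange_psk` calls above pipe the configured key through an inline `python -` heredoc (`nvmf/common.sh@507`) whose body is not visible in the trace. The sketch below reconstructs what that step appears to compute, judging from the visible input (`00112233445566778899aabbccddeeff`, digest `1`) and output (`NVMeTLSkey-1:01:MDAx...JEiQ:`): the key's ASCII bytes with a CRC-32 appended, base64-encoded into the NVMe TLS PSK interchange format. The little-endian CRC byte order and the treatment of the key as raw ASCII (not decoded hex) are assumptions based on that convention, not confirmed by the log; the function name here is ours.

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    """Sketch of the PSK interchange formatting seen in the trace above.

    Assumptions: the configured key is used as its raw ASCII bytes, and a
    little-endian CRC-32 of those bytes is appended before base64 encoding,
    per the NVMe-oF TLS PSK interchange convention.
    """
    key_bytes = key.encode("ascii")
    # CRC-32 over the key bytes, packed little-endian (assumed byte order)
    crc = struct.pack("<I", zlib.crc32(key_bytes))
    b64 = base64.b64encode(key_bytes + crc).decode("ascii")
    # "01" selects the hash identifier seen in the log output
    return f"NVMeTLSkey-1:{hash_id:02}:{b64}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
print(psk)
```

Decoding the log's own value (`MDAxMTIy...`) confirms at least the first 32 bytes are the ASCII key string followed by 4 trailing bytes, consistent with this layout.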
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:25:23.766 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.2peQU8JQgC 00:25:23.766 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2peQU8JQgC 00:25:23.766 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:24.027 [2024-11-05 16:49:30.938561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.027 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:24.288 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:24.288 [2024-11-05 16:49:31.243296] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:24.288 [2024-11-05 16:49:31.243514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.288 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:24.548 malloc0 00:25:24.548 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:24.549 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2peQU8JQgC 
00:25:24.810 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:25.070 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2peQU8JQgC 00:25:35.064 Initializing NVMe Controllers 00:25:35.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:35.064 Initialization complete. Launching workers. 00:25:35.064 ======================================================== 00:25:35.064 Latency(us) 00:25:35.064 Device Information : IOPS MiB/s Average min max 00:25:35.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18627.99 72.77 3435.69 1149.43 5284.50 00:25:35.064 ======================================================== 00:25:35.064 Total : 18627.99 72.77 3435.69 1149.43 5284.50 00:25:35.064 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2peQU8JQgC 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2peQU8JQgC 00:25:35.064 16:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3191378 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3191378 /var/tmp/bdevperf.sock 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3191378 ']' 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:35.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:35.064 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.064 [2024-11-05 16:49:42.044250] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:25:35.064 [2024-11-05 16:49:42.044309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191378 ] 00:25:35.064 [2024-11-05 16:49:42.101059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.325 [2024-11-05 16:49:42.130060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.325 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:35.325 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:35.325 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2peQU8JQgC 00:25:35.325 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:35.585 [2024-11-05 16:49:42.507291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:35.585 TLSTESTn1 00:25:35.585 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:35.845 Running I/O for 10 seconds... 
00:25:37.725 6242.00 IOPS, 24.38 MiB/s [2024-11-05T15:49:45.728Z] 6242.00 IOPS, 24.38 MiB/s [2024-11-05T15:49:47.111Z] 5883.33 IOPS, 22.98 MiB/s [2024-11-05T15:49:48.051Z] 5745.75 IOPS, 22.44 MiB/s [2024-11-05T15:49:48.989Z] 5770.00 IOPS, 22.54 MiB/s [2024-11-05T15:49:49.928Z] 5754.00 IOPS, 22.48 MiB/s [2024-11-05T15:49:50.867Z] 5725.14 IOPS, 22.36 MiB/s [2024-11-05T15:49:51.807Z] 5645.88 IOPS, 22.05 MiB/s [2024-11-05T15:49:52.748Z] 5651.89 IOPS, 22.08 MiB/s [2024-11-05T15:49:52.748Z] 5651.60 IOPS, 22.08 MiB/s 00:25:45.685 Latency(us) 00:25:45.685 [2024-11-05T15:49:52.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.685 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:45.685 Verification LBA range: start 0x0 length 0x2000 00:25:45.685 TLSTESTn1 : 10.02 5654.90 22.09 0.00 0.00 22604.53 6089.39 40413.87 00:25:45.685 [2024-11-05T15:49:52.749Z] =================================================================================================================== 00:25:45.686 [2024-11-05T15:49:52.749Z] Total : 5654.90 22.09 0.00 0.00 22604.53 6089.39 40413.87 00:25:45.686 { 00:25:45.686 "results": [ 00:25:45.686 { 00:25:45.686 "job": "TLSTESTn1", 00:25:45.686 "core_mask": "0x4", 00:25:45.686 "workload": "verify", 00:25:45.686 "status": "finished", 00:25:45.686 "verify_range": { 00:25:45.686 "start": 0, 00:25:45.686 "length": 8192 00:25:45.686 }, 00:25:45.686 "queue_depth": 128, 00:25:45.686 "io_size": 4096, 00:25:45.686 "runtime": 10.016807, 00:25:45.686 "iops": 5654.895816601039, 00:25:45.686 "mibps": 22.089436783597808, 00:25:45.686 "io_failed": 0, 00:25:45.686 "io_timeout": 0, 00:25:45.686 "avg_latency_us": 22604.531845679445, 00:25:45.686 "min_latency_us": 6089.386666666666, 00:25:45.686 "max_latency_us": 40413.86666666667 00:25:45.686 } 00:25:45.686 ], 00:25:45.686 "core_count": 1 00:25:45.686 } 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3191378 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3191378 ']' 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3191378 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3191378 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3191378' 00:25:45.946 killing process with pid 3191378 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3191378 00:25:45.946 Received shutdown signal, test time was about 10.000000 seconds 00:25:45.946 00:25:45.946 Latency(us) 00:25:45.946 [2024-11-05T15:49:53.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.946 [2024-11-05T15:49:53.009Z] =================================================================================================================== 00:25:45.946 [2024-11-05T15:49:53.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3191378 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OO02kbRYCJ 00:25:45.946 
16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OO02kbRYCJ 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OO02kbRYCJ 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OO02kbRYCJ 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3193629 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3193629 /var/tmp/bdevperf.sock 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3193629 ']' 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:45.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:45.946 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:45.946 [2024-11-05 16:49:52.964027] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:25:45.947 [2024-11-05 16:49:52.964087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193629 ] 00:25:46.206 [2024-11-05 16:49:53.021215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.207 [2024-11-05 16:49:53.050803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.207 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:46.207 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:46.207 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OO02kbRYCJ 00:25:46.467 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:46.467 [2024-11-05 16:49:53.468014] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:46.467 [2024-11-05 16:49:53.474959] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:46.467 [2024-11-05 16:49:53.475066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefabb0 (107): Transport endpoint is not connected 00:25:46.467 [2024-11-05 16:49:53.476053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefabb0 (9): Bad file descriptor 00:25:46.467 [2024-11-05 
16:49:53.477055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:46.467 [2024-11-05 16:49:53.477063] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:46.467 [2024-11-05 16:49:53.477068] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:46.467 [2024-11-05 16:49:53.477077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:25:46.467 request: 00:25:46.467 { 00:25:46.467 "name": "TLSTEST", 00:25:46.467 "trtype": "tcp", 00:25:46.467 "traddr": "10.0.0.2", 00:25:46.467 "adrfam": "ipv4", 00:25:46.467 "trsvcid": "4420", 00:25:46.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.467 "prchk_reftag": false, 00:25:46.467 "prchk_guard": false, 00:25:46.467 "hdgst": false, 00:25:46.467 "ddgst": false, 00:25:46.467 "psk": "key0", 00:25:46.467 "allow_unrecognized_csi": false, 00:25:46.467 "method": "bdev_nvme_attach_controller", 00:25:46.467 "req_id": 1 00:25:46.467 } 00:25:46.467 Got JSON-RPC error response 00:25:46.467 response: 00:25:46.467 { 00:25:46.467 "code": -5, 00:25:46.467 "message": "Input/output error" 00:25:46.467 } 00:25:46.467 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3193629 00:25:46.467 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3193629 ']' 00:25:46.467 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3193629 00:25:46.467 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:46.467 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:46.467 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3193629 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3193629' 00:25:46.728 killing process with pid 3193629 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3193629 00:25:46.728 Received shutdown signal, test time was about 10.000000 seconds 00:25:46.728 00:25:46.728 Latency(us) 00:25:46.728 [2024-11-05T15:49:53.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.728 [2024-11-05T15:49:53.791Z] =================================================================================================================== 00:25:46.728 [2024-11-05T15:49:53.791Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3193629 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2peQU8JQgC 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2peQU8JQgC 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2peQU8JQgC 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2peQU8JQgC 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3193650 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3193650 /var/tmp/bdevperf.sock 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3193650 ']' 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.728 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.728 [2024-11-05 16:49:53.723695] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:25:46.728 [2024-11-05 16:49:53.723752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193650 ] 00:25:46.728 [2024-11-05 16:49:53.782134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.994 [2024-11-05 16:49:53.810130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.994 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:46.994 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:46.994 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2peQU8JQgC 00:25:47.256 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:25:47.256 [2024-11-05 16:49:54.231137] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.256 [2024-11-05 16:49:54.236705] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:47.256 [2024-11-05 16:49:54.236725] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:47.256 [2024-11-05 16:49:54.236744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:47.256 [2024-11-05 16:49:54.237442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2320bb0 (107): Transport endpoint is not connected 00:25:47.257 [2024-11-05 16:49:54.238437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2320bb0 (9): Bad file descriptor 00:25:47.257 [2024-11-05 16:49:54.239439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:47.257 [2024-11-05 16:49:54.239448] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:47.257 [2024-11-05 16:49:54.239455] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:47.257 [2024-11-05 16:49:54.239463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:25:47.257 request: 00:25:47.257 { 00:25:47.257 "name": "TLSTEST", 00:25:47.257 "trtype": "tcp", 00:25:47.257 "traddr": "10.0.0.2", 00:25:47.257 "adrfam": "ipv4", 00:25:47.257 "trsvcid": "4420", 00:25:47.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.257 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:47.257 "prchk_reftag": false, 00:25:47.257 "prchk_guard": false, 00:25:47.257 "hdgst": false, 00:25:47.257 "ddgst": false, 00:25:47.257 "psk": "key0", 00:25:47.257 "allow_unrecognized_csi": false, 00:25:47.257 "method": "bdev_nvme_attach_controller", 00:25:47.257 "req_id": 1 00:25:47.257 } 00:25:47.257 Got JSON-RPC error response 00:25:47.257 response: 00:25:47.257 { 00:25:47.257 "code": -5, 00:25:47.257 "message": "Input/output error" 00:25:47.257 } 00:25:47.257 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3193650 00:25:47.257 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3193650 ']' 00:25:47.257 16:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3193650 00:25:47.257 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:47.257 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:47.257 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3193650 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3193650' 00:25:47.518 killing process with pid 3193650 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3193650 00:25:47.518 Received shutdown signal, test time was about 10.000000 seconds 00:25:47.518 00:25:47.518 Latency(us) 00:25:47.518 [2024-11-05T15:49:54.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.518 [2024-11-05T15:49:54.581Z] =================================================================================================================== 00:25:47.518 [2024-11-05T15:49:54.581Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3193650 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:47.518 16:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2peQU8JQgC 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2peQU8JQgC 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2peQU8JQgC 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2peQU8JQgC 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3193891 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3193891 /var/tmp/bdevperf.sock 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3193891 ']' 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:47.518 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:47.518 [2024-11-05 16:49:54.485686] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:25:47.518 [2024-11-05 16:49:54.485743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193891 ] 00:25:47.518 [2024-11-05 16:49:54.544091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.518 [2024-11-05 16:49:54.572896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.779 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:47.779 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:47.779 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2peQU8JQgC 00:25:47.779 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:48.040 [2024-11-05 16:49:54.982112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:48.040 [2024-11-05 16:49:54.988516] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:48.040 [2024-11-05 16:49:54.988533] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:48.040 [2024-11-05 16:49:54.988552] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:48.040 [2024-11-05 16:49:54.989413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb6bb0 (107): Transport endpoint is not connected 00:25:48.040 [2024-11-05 16:49:54.990409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb6bb0 (9): Bad file descriptor 00:25:48.040 [2024-11-05 16:49:54.991412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:25:48.040 [2024-11-05 16:49:54.991420] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:48.040 [2024-11-05 16:49:54.991427] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:25:48.040 [2024-11-05 16:49:54.991435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:25:48.040 request: 00:25:48.040 { 00:25:48.040 "name": "TLSTEST", 00:25:48.040 "trtype": "tcp", 00:25:48.040 "traddr": "10.0.0.2", 00:25:48.040 "adrfam": "ipv4", 00:25:48.040 "trsvcid": "4420", 00:25:48.040 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:48.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.040 "prchk_reftag": false, 00:25:48.040 "prchk_guard": false, 00:25:48.040 "hdgst": false, 00:25:48.040 "ddgst": false, 00:25:48.040 "psk": "key0", 00:25:48.040 "allow_unrecognized_csi": false, 00:25:48.040 "method": "bdev_nvme_attach_controller", 00:25:48.040 "req_id": 1 00:25:48.040 } 00:25:48.040 Got JSON-RPC error response 00:25:48.040 response: 00:25:48.040 { 00:25:48.040 "code": -5, 00:25:48.040 "message": "Input/output error" 00:25:48.040 } 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3193891 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3193891 ']' 00:25:48.040 16:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3193891 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3193891 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3193891' 00:25:48.040 killing process with pid 3193891 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3193891 00:25:48.040 Received shutdown signal, test time was about 10.000000 seconds 00:25:48.040 00:25:48.040 Latency(us) 00:25:48.040 [2024-11-05T15:49:55.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.040 [2024-11-05T15:49:55.103Z] =================================================================================================================== 00:25:48.040 [2024-11-05T15:49:55.103Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:48.040 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3193891 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:48.300 16:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3194004 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:48.300 16:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3194004 /var/tmp/bdevperf.sock 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3194004 ']' 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:48.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:48.300 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.300 [2024-11-05 16:49:55.235016] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:25:48.300 [2024-11-05 16:49:55.235072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194004 ] 00:25:48.300 [2024-11-05 16:49:55.293857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.300 [2024-11-05 16:49:55.321414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.561 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:48.561 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:48.561 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:25:48.561 [2024-11-05 16:49:55.553845] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:25:48.561 [2024-11-05 16:49:55.553872] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:48.561 request: 00:25:48.561 { 00:25:48.561 "name": "key0", 00:25:48.561 "path": "", 00:25:48.561 "method": "keyring_file_add_key", 00:25:48.561 "req_id": 1 00:25:48.561 } 00:25:48.561 Got JSON-RPC error response 00:25:48.561 response: 00:25:48.561 { 00:25:48.561 "code": -1, 00:25:48.561 "message": "Operation not permitted" 00:25:48.561 } 00:25:48.561 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:48.821 [2024-11-05 16:49:55.734384] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:25:48.821 [2024-11-05 16:49:55.734408] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:48.821 request: 00:25:48.821 { 00:25:48.821 "name": "TLSTEST", 00:25:48.821 "trtype": "tcp", 00:25:48.821 "traddr": "10.0.0.2", 00:25:48.821 "adrfam": "ipv4", 00:25:48.821 "trsvcid": "4420", 00:25:48.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.821 "prchk_reftag": false, 00:25:48.821 "prchk_guard": false, 00:25:48.821 "hdgst": false, 00:25:48.821 "ddgst": false, 00:25:48.821 "psk": "key0", 00:25:48.821 "allow_unrecognized_csi": false, 00:25:48.821 "method": "bdev_nvme_attach_controller", 00:25:48.821 "req_id": 1 00:25:48.821 } 00:25:48.821 Got JSON-RPC error response 00:25:48.821 response: 00:25:48.821 { 00:25:48.821 "code": -126, 00:25:48.821 "message": "Required key not available" 00:25:48.821 } 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3194004 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3194004 ']' 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3194004 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3194004 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3194004' 00:25:48.821 killing process with pid 3194004 
00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3194004 00:25:48.821 Received shutdown signal, test time was about 10.000000 seconds 00:25:48.821 00:25:48.821 Latency(us) 00:25:48.821 [2024-11-05T15:49:55.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.821 [2024-11-05T15:49:55.884Z] =================================================================================================================== 00:25:48.821 [2024-11-05T15:49:55.884Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:48.821 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3194004 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3188565 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3188565 ']' 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3188565 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3188565 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# process_name=reactor_1 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3188565' 00:25:49.082 killing process with pid 3188565 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3188565 00:25:49.082 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3188565 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:25:49.082 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.kLnevLjGsF 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:49.432 16:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.kLnevLjGsF 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3194308 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3194308 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3194308 ']' 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:49.432 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.432 [2024-11-05 16:49:56.220513] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:25:49.432 [2024-11-05 16:49:56.220578] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.432 [2024-11-05 16:49:56.310868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.432 [2024-11-05 16:49:56.342367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.432 [2024-11-05 16:49:56.342396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.432 [2024-11-05 16:49:56.342402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.432 [2024-11-05 16:49:56.342407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.432 [2024-11-05 16:49:56.342411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:49.432 [2024-11-05 16:49:56.342875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.061 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:50.061 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:50.061 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:50.061 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:50.061 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:50.061 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.061 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.kLnevLjGsF 00:25:50.062 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kLnevLjGsF 00:25:50.062 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:50.322 [2024-11-05 16:49:57.195111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.322 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:50.322 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:50.582 [2024-11-05 16:49:57.515904] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:50.582 [2024-11-05 16:49:57.516105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:25:50.582 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:50.842 malloc0 00:25:50.842 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:50.842 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:25:51.101 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:51.361 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kLnevLjGsF 00:25:51.361 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:51.361 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:51.361 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:51.361 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kLnevLjGsF 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3194714 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3194714 /var/tmp/bdevperf.sock 
00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3194714 ']' 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.362 [2024-11-05 16:49:58.236620] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:25:51.362 [2024-11-05 16:49:58.236674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194714 ] 00:25:51.362 [2024-11-05 16:49:58.294026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.362 [2024-11-05 16:49:58.323136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:51.362 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:25:51.622 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:51.882 [2024-11-05 16:49:58.740487] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:51.882 TLSTESTn1 00:25:51.882 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:51.882 Running I/O for 10 seconds... 
00:25:54.205 6147.00 IOPS, 24.01 MiB/s [2024-11-05T15:50:02.249Z] 5950.00 IOPS, 23.24 MiB/s [2024-11-05T15:50:03.190Z] 5922.00 IOPS, 23.13 MiB/s [2024-11-05T15:50:04.130Z] 5757.00 IOPS, 22.49 MiB/s [2024-11-05T15:50:05.070Z] 5836.80 IOPS, 22.80 MiB/s [2024-11-05T15:50:06.011Z] 5767.33 IOPS, 22.53 MiB/s [2024-11-05T15:50:06.951Z] 5735.43 IOPS, 22.40 MiB/s [2024-11-05T15:50:08.333Z] 5705.62 IOPS, 22.29 MiB/s [2024-11-05T15:50:09.274Z] 5763.33 IOPS, 22.51 MiB/s [2024-11-05T15:50:09.274Z] 5711.00 IOPS, 22.31 MiB/s 00:26:02.211 Latency(us) 00:26:02.211 [2024-11-05T15:50:09.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.211 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:02.211 Verification LBA range: start 0x0 length 0x2000 00:26:02.211 TLSTESTn1 : 10.02 5712.43 22.31 0.00 0.00 22371.63 5980.16 27962.03 00:26:02.211 [2024-11-05T15:50:09.274Z] =================================================================================================================== 00:26:02.211 [2024-11-05T15:50:09.274Z] Total : 5712.43 22.31 0.00 0.00 22371.63 5980.16 27962.03 00:26:02.211 { 00:26:02.211 "results": [ 00:26:02.211 { 00:26:02.212 "job": "TLSTESTn1", 00:26:02.212 "core_mask": "0x4", 00:26:02.212 "workload": "verify", 00:26:02.212 "status": "finished", 00:26:02.212 "verify_range": { 00:26:02.212 "start": 0, 00:26:02.212 "length": 8192 00:26:02.212 }, 00:26:02.212 "queue_depth": 128, 00:26:02.212 "io_size": 4096, 00:26:02.212 "runtime": 10.0199, 00:26:02.212 "iops": 5712.432259802992, 00:26:02.212 "mibps": 22.314188514855438, 00:26:02.212 "io_failed": 0, 00:26:02.212 "io_timeout": 0, 00:26:02.212 "avg_latency_us": 22371.628090429436, 00:26:02.212 "min_latency_us": 5980.16, 00:26:02.212 "max_latency_us": 27962.02666666667 00:26:02.212 } 00:26:02.212 ], 00:26:02.212 "core_count": 1 00:26:02.212 } 00:26:02.212 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:26:02.212 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3194714 00:26:02.212 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3194714 ']' 00:26:02.212 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3194714 00:26:02.212 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:02.212 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.212 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3194714 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3194714' 00:26:02.212 killing process with pid 3194714 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3194714 00:26:02.212 Received shutdown signal, test time was about 10.000000 seconds 00:26:02.212 00:26:02.212 Latency(us) 00:26:02.212 [2024-11-05T15:50:09.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.212 [2024-11-05T15:50:09.275Z] =================================================================================================================== 00:26:02.212 [2024-11-05T15:50:09.275Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3194714 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.kLnevLjGsF 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kLnevLjGsF 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kLnevLjGsF 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kLnevLjGsF 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kLnevLjGsF 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3196731 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3196731 /var/tmp/bdevperf.sock 00:26:02.212 
16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3196731 ']' 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:02.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:02.212 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.212 [2024-11-05 16:50:09.208495] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:26:02.212 [2024-11-05 16:50:09.208552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196731 ] 00:26:02.212 [2024-11-05 16:50:09.267115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.472 [2024-11-05 16:50:09.295428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.472 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:02.472 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:02.472 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:26:02.472 [2024-11-05 16:50:09.532045] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kLnevLjGsF': 0100666 00:26:02.472 [2024-11-05 16:50:09.532071] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:02.737 request: 00:26:02.737 { 00:26:02.737 "name": "key0", 00:26:02.737 "path": "/tmp/tmp.kLnevLjGsF", 00:26:02.737 "method": "keyring_file_add_key", 00:26:02.737 "req_id": 1 00:26:02.737 } 00:26:02.737 Got JSON-RPC error response 00:26:02.737 response: 00:26:02.737 { 00:26:02.737 "code": -1, 00:26:02.737 "message": "Operation not permitted" 00:26:02.737 } 00:26:02.737 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:02.737 [2024-11-05 16:50:09.716587] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:02.737 [2024-11-05 16:50:09.716615] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:26:02.737 request: 00:26:02.737 { 00:26:02.738 "name": "TLSTEST", 00:26:02.738 "trtype": "tcp", 00:26:02.738 "traddr": "10.0.0.2", 00:26:02.738 "adrfam": "ipv4", 00:26:02.738 "trsvcid": "4420", 00:26:02.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:02.738 "prchk_reftag": false, 00:26:02.738 "prchk_guard": false, 00:26:02.738 "hdgst": false, 00:26:02.738 "ddgst": false, 00:26:02.738 "psk": "key0", 00:26:02.738 "allow_unrecognized_csi": false, 00:26:02.738 "method": "bdev_nvme_attach_controller", 00:26:02.738 "req_id": 1 00:26:02.738 } 00:26:02.738 Got JSON-RPC error response 00:26:02.738 response: 00:26:02.738 { 00:26:02.738 "code": -126, 00:26:02.738 "message": "Required key not available" 00:26:02.738 } 00:26:02.738 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3196731 00:26:02.738 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3196731 ']' 00:26:02.738 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3196731 00:26:02.738 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:02.738 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.738 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3196731 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 3196731' 00:26:02.999 killing process with pid 3196731 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3196731 00:26:02.999 Received shutdown signal, test time was about 10.000000 seconds 00:26:02.999 00:26:02.999 Latency(us) 00:26:02.999 [2024-11-05T15:50:10.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.999 [2024-11-05T15:50:10.062Z] =================================================================================================================== 00:26:02.999 [2024-11-05T15:50:10.062Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3196731 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3194308 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3194308 ']' 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3194308 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3194308 00:26:02.999 
16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3194308' 00:26:02.999 killing process with pid 3194308 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3194308 00:26:02.999 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3194308 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3197057 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3197057 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3197057 ']' 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:03.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:03.260 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.260 [2024-11-05 16:50:10.137626] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:03.260 [2024-11-05 16:50:10.137684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.260 [2024-11-05 16:50:10.225581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.260 [2024-11-05 16:50:10.254067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.260 [2024-11-05 16:50:10.254099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.260 [2024-11-05 16:50:10.254105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.260 [2024-11-05 16:50:10.254109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.260 [2024-11-05 16:50:10.254113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:03.260 [2024-11-05 16:50:10.254581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.kLnevLjGsF 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.kLnevLjGsF 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.kLnevLjGsF 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kLnevLjGsF 00:26:04.201 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:04.201 [2024-11-05 16:50:11.125606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.201 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:04.461 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:04.461 [2024-11-05 16:50:11.462432] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:04.461 [2024-11-05 16:50:11.462653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.461 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:04.721 malloc0 00:26:04.721 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:04.981 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:26:04.981 [2024-11-05 16:50:11.965603] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kLnevLjGsF': 0100666 00:26:04.981 [2024-11-05 16:50:11.965628] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:04.981 request: 00:26:04.981 { 00:26:04.981 "name": "key0", 00:26:04.981 "path": "/tmp/tmp.kLnevLjGsF", 00:26:04.981 "method": "keyring_file_add_key", 00:26:04.981 "req_id": 1 
00:26:04.981 } 00:26:04.981 Got JSON-RPC error response 00:26:04.981 response: 00:26:04.981 { 00:26:04.981 "code": -1, 00:26:04.981 "message": "Operation not permitted" 00:26:04.981 } 00:26:04.981 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:05.242 [2024-11-05 16:50:12.134043] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:26:05.242 [2024-11-05 16:50:12.134070] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:26:05.242 request: 00:26:05.242 { 00:26:05.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.242 "host": "nqn.2016-06.io.spdk:host1", 00:26:05.242 "psk": "key0", 00:26:05.242 "method": "nvmf_subsystem_add_host", 00:26:05.242 "req_id": 1 00:26:05.242 } 00:26:05.242 Got JSON-RPC error response 00:26:05.242 response: 00:26:05.242 { 00:26:05.242 "code": -32603, 00:26:05.242 "message": "Internal error" 00:26:05.242 } 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3197057 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3197057 ']' 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3197057 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:05.242 16:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3197057 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3197057' 00:26:05.242 killing process with pid 3197057 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3197057 00:26:05.242 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3197057 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.kLnevLjGsF 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3197447 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3197447 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3197447 ']' 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:05.502 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:05.502 [2024-11-05 16:50:12.388516] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:05.502 [2024-11-05 16:50:12.388581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.503 [2024-11-05 16:50:12.475667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.503 [2024-11-05 16:50:12.502883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.503 [2024-11-05 16:50:12.502914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.503 [2024-11-05 16:50:12.502919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.503 [2024-11-05 16:50:12.502925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.503 [2024-11-05 16:50:12.502930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:05.503 [2024-11-05 16:50:12.503415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.kLnevLjGsF 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kLnevLjGsF 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:05.763 [2024-11-05 16:50:12.764884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.763 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:06.023 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:06.023 [2024-11-05 16:50:13.085676] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:06.023 [2024-11-05 16:50:13.085903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:26:06.283 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:06.283 malloc0 00:26:06.283 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:06.544 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:26:06.544 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3197811 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3197811 /var/tmp/bdevperf.sock 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3197811 ']' 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:26:06.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:06.804 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:06.804 [2024-11-05 16:50:13.810493] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:06.804 [2024-11-05 16:50:13.810546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197811 ] 00:26:07.065 [2024-11-05 16:50:13.869425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.065 [2024-11-05 16:50:13.898424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.065 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:07.065 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:07.065 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:26:07.324 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:07.325 [2024-11-05 16:50:14.315570] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:07.325 TLSTESTn1 00:26:07.585 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:26:07.847 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:26:07.847 "subsystems": [ 00:26:07.847 { 00:26:07.847 "subsystem": "keyring", 00:26:07.847 "config": [ 00:26:07.847 { 00:26:07.847 "method": "keyring_file_add_key", 00:26:07.847 "params": { 00:26:07.847 "name": "key0", 00:26:07.847 "path": "/tmp/tmp.kLnevLjGsF" 00:26:07.847 } 00:26:07.847 } 00:26:07.847 ] 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "subsystem": "iobuf", 00:26:07.847 "config": [ 00:26:07.847 { 00:26:07.847 "method": "iobuf_set_options", 00:26:07.847 "params": { 00:26:07.847 "small_pool_count": 8192, 00:26:07.847 "large_pool_count": 1024, 00:26:07.847 "small_bufsize": 8192, 00:26:07.847 "large_bufsize": 135168, 00:26:07.847 "enable_numa": false 00:26:07.847 } 00:26:07.847 } 00:26:07.847 ] 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "subsystem": "sock", 00:26:07.847 "config": [ 00:26:07.847 { 00:26:07.847 "method": "sock_set_default_impl", 00:26:07.847 "params": { 00:26:07.847 "impl_name": "posix" 00:26:07.847 } 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "method": "sock_impl_set_options", 00:26:07.847 "params": { 00:26:07.847 "impl_name": "ssl", 00:26:07.847 "recv_buf_size": 4096, 00:26:07.847 "send_buf_size": 4096, 00:26:07.847 "enable_recv_pipe": true, 00:26:07.847 "enable_quickack": false, 00:26:07.847 "enable_placement_id": 0, 00:26:07.847 "enable_zerocopy_send_server": true, 00:26:07.847 "enable_zerocopy_send_client": false, 00:26:07.847 "zerocopy_threshold": 0, 00:26:07.847 "tls_version": 0, 00:26:07.847 "enable_ktls": false 00:26:07.847 } 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "method": "sock_impl_set_options", 00:26:07.847 "params": { 00:26:07.847 "impl_name": "posix", 00:26:07.847 "recv_buf_size": 2097152, 00:26:07.847 "send_buf_size": 2097152, 00:26:07.847 "enable_recv_pipe": true, 00:26:07.847 "enable_quickack": false, 00:26:07.847 "enable_placement_id": 0, 
00:26:07.847 "enable_zerocopy_send_server": true, 00:26:07.847 "enable_zerocopy_send_client": false, 00:26:07.847 "zerocopy_threshold": 0, 00:26:07.847 "tls_version": 0, 00:26:07.847 "enable_ktls": false 00:26:07.847 } 00:26:07.847 } 00:26:07.847 ] 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "subsystem": "vmd", 00:26:07.847 "config": [] 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "subsystem": "accel", 00:26:07.847 "config": [ 00:26:07.847 { 00:26:07.847 "method": "accel_set_options", 00:26:07.847 "params": { 00:26:07.847 "small_cache_size": 128, 00:26:07.847 "large_cache_size": 16, 00:26:07.847 "task_count": 2048, 00:26:07.847 "sequence_count": 2048, 00:26:07.847 "buf_count": 2048 00:26:07.847 } 00:26:07.847 } 00:26:07.847 ] 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "subsystem": "bdev", 00:26:07.847 "config": [ 00:26:07.847 { 00:26:07.847 "method": "bdev_set_options", 00:26:07.847 "params": { 00:26:07.847 "bdev_io_pool_size": 65535, 00:26:07.847 "bdev_io_cache_size": 256, 00:26:07.847 "bdev_auto_examine": true, 00:26:07.847 "iobuf_small_cache_size": 128, 00:26:07.847 "iobuf_large_cache_size": 16 00:26:07.847 } 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "method": "bdev_raid_set_options", 00:26:07.847 "params": { 00:26:07.847 "process_window_size_kb": 1024, 00:26:07.847 "process_max_bandwidth_mb_sec": 0 00:26:07.847 } 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "method": "bdev_iscsi_set_options", 00:26:07.847 "params": { 00:26:07.847 "timeout_sec": 30 00:26:07.847 } 00:26:07.847 }, 00:26:07.847 { 00:26:07.847 "method": "bdev_nvme_set_options", 00:26:07.847 "params": { 00:26:07.847 "action_on_timeout": "none", 00:26:07.847 "timeout_us": 0, 00:26:07.847 "timeout_admin_us": 0, 00:26:07.847 "keep_alive_timeout_ms": 10000, 00:26:07.847 "arbitration_burst": 0, 00:26:07.847 "low_priority_weight": 0, 00:26:07.847 "medium_priority_weight": 0, 00:26:07.847 "high_priority_weight": 0, 00:26:07.847 "nvme_adminq_poll_period_us": 10000, 00:26:07.847 "nvme_ioq_poll_period_us": 0, 
00:26:07.847 "io_queue_requests": 0, 00:26:07.847 "delay_cmd_submit": true, 00:26:07.847 "transport_retry_count": 4, 00:26:07.847 "bdev_retry_count": 3, 00:26:07.847 "transport_ack_timeout": 0, 00:26:07.847 "ctrlr_loss_timeout_sec": 0, 00:26:07.847 "reconnect_delay_sec": 0, 00:26:07.847 "fast_io_fail_timeout_sec": 0, 00:26:07.847 "disable_auto_failback": false, 00:26:07.847 "generate_uuids": false, 00:26:07.847 "transport_tos": 0, 00:26:07.847 "nvme_error_stat": false, 00:26:07.847 "rdma_srq_size": 0, 00:26:07.847 "io_path_stat": false, 00:26:07.847 "allow_accel_sequence": false, 00:26:07.847 "rdma_max_cq_size": 0, 00:26:07.847 "rdma_cm_event_timeout_ms": 0, 00:26:07.847 "dhchap_digests": [ 00:26:07.847 "sha256", 00:26:07.848 "sha384", 00:26:07.848 "sha512" 00:26:07.848 ], 00:26:07.848 "dhchap_dhgroups": [ 00:26:07.848 "null", 00:26:07.848 "ffdhe2048", 00:26:07.848 "ffdhe3072", 00:26:07.848 "ffdhe4096", 00:26:07.848 "ffdhe6144", 00:26:07.848 "ffdhe8192" 00:26:07.848 ] 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "bdev_nvme_set_hotplug", 00:26:07.848 "params": { 00:26:07.848 "period_us": 100000, 00:26:07.848 "enable": false 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "bdev_malloc_create", 00:26:07.848 "params": { 00:26:07.848 "name": "malloc0", 00:26:07.848 "num_blocks": 8192, 00:26:07.848 "block_size": 4096, 00:26:07.848 "physical_block_size": 4096, 00:26:07.848 "uuid": "cc6ec6ad-7e1d-4101-bdaf-fe327d3df01e", 00:26:07.848 "optimal_io_boundary": 0, 00:26:07.848 "md_size": 0, 00:26:07.848 "dif_type": 0, 00:26:07.848 "dif_is_head_of_md": false, 00:26:07.848 "dif_pi_format": 0 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "bdev_wait_for_examine" 00:26:07.848 } 00:26:07.848 ] 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "subsystem": "nbd", 00:26:07.848 "config": [] 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "subsystem": "scheduler", 00:26:07.848 "config": [ 00:26:07.848 { 00:26:07.848 "method": 
"framework_set_scheduler", 00:26:07.848 "params": { 00:26:07.848 "name": "static" 00:26:07.848 } 00:26:07.848 } 00:26:07.848 ] 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "subsystem": "nvmf", 00:26:07.848 "config": [ 00:26:07.848 { 00:26:07.848 "method": "nvmf_set_config", 00:26:07.848 "params": { 00:26:07.848 "discovery_filter": "match_any", 00:26:07.848 "admin_cmd_passthru": { 00:26:07.848 "identify_ctrlr": false 00:26:07.848 }, 00:26:07.848 "dhchap_digests": [ 00:26:07.848 "sha256", 00:26:07.848 "sha384", 00:26:07.848 "sha512" 00:26:07.848 ], 00:26:07.848 "dhchap_dhgroups": [ 00:26:07.848 "null", 00:26:07.848 "ffdhe2048", 00:26:07.848 "ffdhe3072", 00:26:07.848 "ffdhe4096", 00:26:07.848 "ffdhe6144", 00:26:07.848 "ffdhe8192" 00:26:07.848 ] 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "nvmf_set_max_subsystems", 00:26:07.848 "params": { 00:26:07.848 "max_subsystems": 1024 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "nvmf_set_crdt", 00:26:07.848 "params": { 00:26:07.848 "crdt1": 0, 00:26:07.848 "crdt2": 0, 00:26:07.848 "crdt3": 0 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "nvmf_create_transport", 00:26:07.848 "params": { 00:26:07.848 "trtype": "TCP", 00:26:07.848 "max_queue_depth": 128, 00:26:07.848 "max_io_qpairs_per_ctrlr": 127, 00:26:07.848 "in_capsule_data_size": 4096, 00:26:07.848 "max_io_size": 131072, 00:26:07.848 "io_unit_size": 131072, 00:26:07.848 "max_aq_depth": 128, 00:26:07.848 "num_shared_buffers": 511, 00:26:07.848 "buf_cache_size": 4294967295, 00:26:07.848 "dif_insert_or_strip": false, 00:26:07.848 "zcopy": false, 00:26:07.848 "c2h_success": false, 00:26:07.848 "sock_priority": 0, 00:26:07.848 "abort_timeout_sec": 1, 00:26:07.848 "ack_timeout": 0, 00:26:07.848 "data_wr_pool_size": 0 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "nvmf_create_subsystem", 00:26:07.848 "params": { 00:26:07.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.848 
"allow_any_host": false, 00:26:07.848 "serial_number": "SPDK00000000000001", 00:26:07.848 "model_number": "SPDK bdev Controller", 00:26:07.848 "max_namespaces": 10, 00:26:07.848 "min_cntlid": 1, 00:26:07.848 "max_cntlid": 65519, 00:26:07.848 "ana_reporting": false 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "nvmf_subsystem_add_host", 00:26:07.848 "params": { 00:26:07.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.848 "host": "nqn.2016-06.io.spdk:host1", 00:26:07.848 "psk": "key0" 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "nvmf_subsystem_add_ns", 00:26:07.848 "params": { 00:26:07.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.848 "namespace": { 00:26:07.848 "nsid": 1, 00:26:07.848 "bdev_name": "malloc0", 00:26:07.848 "nguid": "CC6EC6AD7E1D4101BDAFFE327D3DF01E", 00:26:07.848 "uuid": "cc6ec6ad-7e1d-4101-bdaf-fe327d3df01e", 00:26:07.848 "no_auto_visible": false 00:26:07.848 } 00:26:07.848 } 00:26:07.848 }, 00:26:07.848 { 00:26:07.848 "method": "nvmf_subsystem_add_listener", 00:26:07.848 "params": { 00:26:07.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.848 "listen_address": { 00:26:07.848 "trtype": "TCP", 00:26:07.848 "adrfam": "IPv4", 00:26:07.848 "traddr": "10.0.0.2", 00:26:07.848 "trsvcid": "4420" 00:26:07.848 }, 00:26:07.848 "secure_channel": true 00:26:07.848 } 00:26:07.848 } 00:26:07.848 ] 00:26:07.848 } 00:26:07.848 ] 00:26:07.848 }' 00:26:07.848 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:08.109 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:26:08.109 "subsystems": [ 00:26:08.109 { 00:26:08.109 "subsystem": "keyring", 00:26:08.109 "config": [ 00:26:08.109 { 00:26:08.109 "method": "keyring_file_add_key", 00:26:08.109 "params": { 00:26:08.109 "name": "key0", 00:26:08.109 "path": "/tmp/tmp.kLnevLjGsF" 00:26:08.109 } 
00:26:08.109 } 00:26:08.109 ] 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "subsystem": "iobuf", 00:26:08.109 "config": [ 00:26:08.109 { 00:26:08.109 "method": "iobuf_set_options", 00:26:08.109 "params": { 00:26:08.109 "small_pool_count": 8192, 00:26:08.109 "large_pool_count": 1024, 00:26:08.109 "small_bufsize": 8192, 00:26:08.109 "large_bufsize": 135168, 00:26:08.109 "enable_numa": false 00:26:08.109 } 00:26:08.109 } 00:26:08.109 ] 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "subsystem": "sock", 00:26:08.109 "config": [ 00:26:08.109 { 00:26:08.109 "method": "sock_set_default_impl", 00:26:08.109 "params": { 00:26:08.109 "impl_name": "posix" 00:26:08.109 } 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "method": "sock_impl_set_options", 00:26:08.109 "params": { 00:26:08.109 "impl_name": "ssl", 00:26:08.109 "recv_buf_size": 4096, 00:26:08.109 "send_buf_size": 4096, 00:26:08.109 "enable_recv_pipe": true, 00:26:08.109 "enable_quickack": false, 00:26:08.109 "enable_placement_id": 0, 00:26:08.109 "enable_zerocopy_send_server": true, 00:26:08.109 "enable_zerocopy_send_client": false, 00:26:08.109 "zerocopy_threshold": 0, 00:26:08.109 "tls_version": 0, 00:26:08.109 "enable_ktls": false 00:26:08.109 } 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "method": "sock_impl_set_options", 00:26:08.109 "params": { 00:26:08.109 "impl_name": "posix", 00:26:08.109 "recv_buf_size": 2097152, 00:26:08.109 "send_buf_size": 2097152, 00:26:08.109 "enable_recv_pipe": true, 00:26:08.109 "enable_quickack": false, 00:26:08.109 "enable_placement_id": 0, 00:26:08.109 "enable_zerocopy_send_server": true, 00:26:08.109 "enable_zerocopy_send_client": false, 00:26:08.109 "zerocopy_threshold": 0, 00:26:08.109 "tls_version": 0, 00:26:08.109 "enable_ktls": false 00:26:08.109 } 00:26:08.109 } 00:26:08.109 ] 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "subsystem": "vmd", 00:26:08.109 "config": [] 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "subsystem": "accel", 00:26:08.109 "config": [ 00:26:08.109 { 00:26:08.109 
"method": "accel_set_options", 00:26:08.109 "params": { 00:26:08.109 "small_cache_size": 128, 00:26:08.109 "large_cache_size": 16, 00:26:08.109 "task_count": 2048, 00:26:08.109 "sequence_count": 2048, 00:26:08.109 "buf_count": 2048 00:26:08.109 } 00:26:08.109 } 00:26:08.109 ] 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "subsystem": "bdev", 00:26:08.109 "config": [ 00:26:08.109 { 00:26:08.109 "method": "bdev_set_options", 00:26:08.109 "params": { 00:26:08.109 "bdev_io_pool_size": 65535, 00:26:08.109 "bdev_io_cache_size": 256, 00:26:08.109 "bdev_auto_examine": true, 00:26:08.109 "iobuf_small_cache_size": 128, 00:26:08.109 "iobuf_large_cache_size": 16 00:26:08.109 } 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "method": "bdev_raid_set_options", 00:26:08.109 "params": { 00:26:08.109 "process_window_size_kb": 1024, 00:26:08.109 "process_max_bandwidth_mb_sec": 0 00:26:08.109 } 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "method": "bdev_iscsi_set_options", 00:26:08.109 "params": { 00:26:08.109 "timeout_sec": 30 00:26:08.109 } 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "method": "bdev_nvme_set_options", 00:26:08.109 "params": { 00:26:08.109 "action_on_timeout": "none", 00:26:08.109 "timeout_us": 0, 00:26:08.109 "timeout_admin_us": 0, 00:26:08.109 "keep_alive_timeout_ms": 10000, 00:26:08.109 "arbitration_burst": 0, 00:26:08.109 "low_priority_weight": 0, 00:26:08.109 "medium_priority_weight": 0, 00:26:08.109 "high_priority_weight": 0, 00:26:08.109 "nvme_adminq_poll_period_us": 10000, 00:26:08.109 "nvme_ioq_poll_period_us": 0, 00:26:08.109 "io_queue_requests": 512, 00:26:08.109 "delay_cmd_submit": true, 00:26:08.109 "transport_retry_count": 4, 00:26:08.109 "bdev_retry_count": 3, 00:26:08.109 "transport_ack_timeout": 0, 00:26:08.109 "ctrlr_loss_timeout_sec": 0, 00:26:08.109 "reconnect_delay_sec": 0, 00:26:08.109 "fast_io_fail_timeout_sec": 0, 00:26:08.109 "disable_auto_failback": false, 00:26:08.109 "generate_uuids": false, 00:26:08.109 "transport_tos": 0, 00:26:08.109 
"nvme_error_stat": false, 00:26:08.109 "rdma_srq_size": 0, 00:26:08.109 "io_path_stat": false, 00:26:08.109 "allow_accel_sequence": false, 00:26:08.109 "rdma_max_cq_size": 0, 00:26:08.109 "rdma_cm_event_timeout_ms": 0, 00:26:08.109 "dhchap_digests": [ 00:26:08.109 "sha256", 00:26:08.109 "sha384", 00:26:08.109 "sha512" 00:26:08.109 ], 00:26:08.109 "dhchap_dhgroups": [ 00:26:08.109 "null", 00:26:08.109 "ffdhe2048", 00:26:08.109 "ffdhe3072", 00:26:08.109 "ffdhe4096", 00:26:08.109 "ffdhe6144", 00:26:08.109 "ffdhe8192" 00:26:08.109 ] 00:26:08.109 } 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "method": "bdev_nvme_attach_controller", 00:26:08.109 "params": { 00:26:08.109 "name": "TLSTEST", 00:26:08.109 "trtype": "TCP", 00:26:08.109 "adrfam": "IPv4", 00:26:08.109 "traddr": "10.0.0.2", 00:26:08.109 "trsvcid": "4420", 00:26:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.109 "prchk_reftag": false, 00:26:08.109 "prchk_guard": false, 00:26:08.109 "ctrlr_loss_timeout_sec": 0, 00:26:08.109 "reconnect_delay_sec": 0, 00:26:08.109 "fast_io_fail_timeout_sec": 0, 00:26:08.109 "psk": "key0", 00:26:08.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:08.109 "hdgst": false, 00:26:08.109 "ddgst": false, 00:26:08.109 "multipath": "multipath" 00:26:08.109 } 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "method": "bdev_nvme_set_hotplug", 00:26:08.109 "params": { 00:26:08.109 "period_us": 100000, 00:26:08.109 "enable": false 00:26:08.109 } 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "method": "bdev_wait_for_examine" 00:26:08.109 } 00:26:08.109 ] 00:26:08.109 }, 00:26:08.109 { 00:26:08.109 "subsystem": "nbd", 00:26:08.109 "config": [] 00:26:08.109 } 00:26:08.109 ] 00:26:08.109 }' 00:26:08.109 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3197811 00:26:08.109 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3197811 ']' 00:26:08.109 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# kill -0 3197811 00:26:08.109 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:08.109 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:08.109 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3197811 00:26:08.110 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:08.110 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:08.110 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3197811' 00:26:08.110 killing process with pid 3197811 00:26:08.110 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3197811 00:26:08.110 Received shutdown signal, test time was about 10.000000 seconds 00:26:08.110 00:26:08.110 Latency(us) 00:26:08.110 [2024-11-05T15:50:15.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.110 [2024-11-05T15:50:15.173Z] =================================================================================================================== 00:26:08.110 [2024-11-05T15:50:15.173Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:08.110 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3197811 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3197447 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3197447 ']' 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3197447 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3197447 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3197447' 00:26:08.110 killing process with pid 3197447 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3197447 00:26:08.110 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3197447 00:26:08.370 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:08.370 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:08.370 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:08.370 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.370 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:26:08.370 "subsystems": [ 00:26:08.370 { 00:26:08.370 "subsystem": "keyring", 00:26:08.370 "config": [ 00:26:08.370 { 00:26:08.370 "method": "keyring_file_add_key", 00:26:08.370 "params": { 00:26:08.370 "name": "key0", 00:26:08.370 "path": "/tmp/tmp.kLnevLjGsF" 00:26:08.370 } 00:26:08.370 } 00:26:08.371 ] 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "subsystem": "iobuf", 00:26:08.371 "config": [ 00:26:08.371 { 00:26:08.371 "method": "iobuf_set_options", 00:26:08.371 "params": { 00:26:08.371 "small_pool_count": 8192, 00:26:08.371 "large_pool_count": 1024, 00:26:08.371 "small_bufsize": 8192, 00:26:08.371 "large_bufsize": 135168, 
00:26:08.371 "enable_numa": false 00:26:08.371 } 00:26:08.371 } 00:26:08.371 ] 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "subsystem": "sock", 00:26:08.371 "config": [ 00:26:08.371 { 00:26:08.371 "method": "sock_set_default_impl", 00:26:08.371 "params": { 00:26:08.371 "impl_name": "posix" 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "sock_impl_set_options", 00:26:08.371 "params": { 00:26:08.371 "impl_name": "ssl", 00:26:08.371 "recv_buf_size": 4096, 00:26:08.371 "send_buf_size": 4096, 00:26:08.371 "enable_recv_pipe": true, 00:26:08.371 "enable_quickack": false, 00:26:08.371 "enable_placement_id": 0, 00:26:08.371 "enable_zerocopy_send_server": true, 00:26:08.371 "enable_zerocopy_send_client": false, 00:26:08.371 "zerocopy_threshold": 0, 00:26:08.371 "tls_version": 0, 00:26:08.371 "enable_ktls": false 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "sock_impl_set_options", 00:26:08.371 "params": { 00:26:08.371 "impl_name": "posix", 00:26:08.371 "recv_buf_size": 2097152, 00:26:08.371 "send_buf_size": 2097152, 00:26:08.371 "enable_recv_pipe": true, 00:26:08.371 "enable_quickack": false, 00:26:08.371 "enable_placement_id": 0, 00:26:08.371 "enable_zerocopy_send_server": true, 00:26:08.371 "enable_zerocopy_send_client": false, 00:26:08.371 "zerocopy_threshold": 0, 00:26:08.371 "tls_version": 0, 00:26:08.371 "enable_ktls": false 00:26:08.371 } 00:26:08.371 } 00:26:08.371 ] 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "subsystem": "vmd", 00:26:08.371 "config": [] 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "subsystem": "accel", 00:26:08.371 "config": [ 00:26:08.371 { 00:26:08.371 "method": "accel_set_options", 00:26:08.371 "params": { 00:26:08.371 "small_cache_size": 128, 00:26:08.371 "large_cache_size": 16, 00:26:08.371 "task_count": 2048, 00:26:08.371 "sequence_count": 2048, 00:26:08.371 "buf_count": 2048 00:26:08.371 } 00:26:08.371 } 00:26:08.371 ] 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "subsystem": "bdev", 00:26:08.371 
"config": [ 00:26:08.371 { 00:26:08.371 "method": "bdev_set_options", 00:26:08.371 "params": { 00:26:08.371 "bdev_io_pool_size": 65535, 00:26:08.371 "bdev_io_cache_size": 256, 00:26:08.371 "bdev_auto_examine": true, 00:26:08.371 "iobuf_small_cache_size": 128, 00:26:08.371 "iobuf_large_cache_size": 16 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "bdev_raid_set_options", 00:26:08.371 "params": { 00:26:08.371 "process_window_size_kb": 1024, 00:26:08.371 "process_max_bandwidth_mb_sec": 0 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "bdev_iscsi_set_options", 00:26:08.371 "params": { 00:26:08.371 "timeout_sec": 30 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "bdev_nvme_set_options", 00:26:08.371 "params": { 00:26:08.371 "action_on_timeout": "none", 00:26:08.371 "timeout_us": 0, 00:26:08.371 "timeout_admin_us": 0, 00:26:08.371 "keep_alive_timeout_ms": 10000, 00:26:08.371 "arbitration_burst": 0, 00:26:08.371 "low_priority_weight": 0, 00:26:08.371 "medium_priority_weight": 0, 00:26:08.371 "high_priority_weight": 0, 00:26:08.371 "nvme_adminq_poll_period_us": 10000, 00:26:08.371 "nvme_ioq_poll_period_us": 0, 00:26:08.371 "io_queue_requests": 0, 00:26:08.371 "delay_cmd_submit": true, 00:26:08.371 "transport_retry_count": 4, 00:26:08.371 "bdev_retry_count": 3, 00:26:08.371 "transport_ack_timeout": 0, 00:26:08.371 "ctrlr_loss_timeout_sec": 0, 00:26:08.371 "reconnect_delay_sec": 0, 00:26:08.371 "fast_io_fail_timeout_sec": 0, 00:26:08.371 "disable_auto_failback": false, 00:26:08.371 "generate_uuids": false, 00:26:08.371 "transport_tos": 0, 00:26:08.371 "nvme_error_stat": false, 00:26:08.371 "rdma_srq_size": 0, 00:26:08.371 "io_path_stat": false, 00:26:08.371 "allow_accel_sequence": false, 00:26:08.371 "rdma_max_cq_size": 0, 00:26:08.371 "rdma_cm_event_timeout_ms": 0, 00:26:08.371 "dhchap_digests": [ 00:26:08.371 "sha256", 00:26:08.371 "sha384", 00:26:08.371 "sha512" 00:26:08.371 ], 00:26:08.371 
"dhchap_dhgroups": [ 00:26:08.371 "null", 00:26:08.371 "ffdhe2048", 00:26:08.371 "ffdhe3072", 00:26:08.371 "ffdhe4096", 00:26:08.371 "ffdhe6144", 00:26:08.371 "ffdhe8192" 00:26:08.371 ] 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "bdev_nvme_set_hotplug", 00:26:08.371 "params": { 00:26:08.371 "period_us": 100000, 00:26:08.371 "enable": false 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "bdev_malloc_create", 00:26:08.371 "params": { 00:26:08.371 "name": "malloc0", 00:26:08.371 "num_blocks": 8192, 00:26:08.371 "block_size": 4096, 00:26:08.371 "physical_block_size": 4096, 00:26:08.371 "uuid": "cc6ec6ad-7e1d-4101-bdaf-fe327d3df01e", 00:26:08.371 "optimal_io_boundary": 0, 00:26:08.371 "md_size": 0, 00:26:08.371 "dif_type": 0, 00:26:08.371 "dif_is_head_of_md": false, 00:26:08.371 "dif_pi_format": 0 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "bdev_wait_for_examine" 00:26:08.371 } 00:26:08.371 ] 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "subsystem": "nbd", 00:26:08.371 "config": [] 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "subsystem": "scheduler", 00:26:08.371 "config": [ 00:26:08.371 { 00:26:08.371 "method": "framework_set_scheduler", 00:26:08.371 "params": { 00:26:08.371 "name": "static" 00:26:08.371 } 00:26:08.371 } 00:26:08.371 ] 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "subsystem": "nvmf", 00:26:08.371 "config": [ 00:26:08.371 { 00:26:08.371 "method": "nvmf_set_config", 00:26:08.371 "params": { 00:26:08.371 "discovery_filter": "match_any", 00:26:08.371 "admin_cmd_passthru": { 00:26:08.371 "identify_ctrlr": false 00:26:08.371 }, 00:26:08.371 "dhchap_digests": [ 00:26:08.371 "sha256", 00:26:08.371 "sha384", 00:26:08.371 "sha512" 00:26:08.371 ], 00:26:08.371 "dhchap_dhgroups": [ 00:26:08.371 "null", 00:26:08.371 "ffdhe2048", 00:26:08.371 "ffdhe3072", 00:26:08.371 "ffdhe4096", 00:26:08.371 "ffdhe6144", 00:26:08.371 "ffdhe8192" 00:26:08.371 ] 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 
00:26:08.371 "method": "nvmf_set_max_subsystems", 00:26:08.371 "params": { 00:26:08.371 "max_subsystems": 1024 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "nvmf_set_crdt", 00:26:08.371 "params": { 00:26:08.371 "crdt1": 0, 00:26:08.371 "crdt2": 0, 00:26:08.371 "crdt3": 0 00:26:08.371 } 00:26:08.371 }, 00:26:08.371 { 00:26:08.371 "method": "nvmf_create_transport", 00:26:08.371 "params": { 00:26:08.371 "trtype": "TCP", 00:26:08.371 "max_queue_depth": 128, 00:26:08.371 "max_io_qpairs_per_ctrlr": 127, 00:26:08.371 "in_capsule_data_size": 4096, 00:26:08.371 "max_io_size": 131072, 00:26:08.371 "io_unit_size": 131072, 00:26:08.371 "max_aq_depth": 128, 00:26:08.371 "num_shared_buffers": 511, 00:26:08.371 "buf_cache_size": 4294967295, 00:26:08.371 "dif_insert_or_strip": false, 00:26:08.372 "zcopy": false, 00:26:08.372 "c2h_success": false, 00:26:08.372 "sock_priority": 0, 00:26:08.372 "abort_timeout_sec": 1, 00:26:08.372 "ack_timeout": 0, 00:26:08.372 "data_wr_pool_size": 0 00:26:08.372 } 00:26:08.372 }, 00:26:08.372 { 00:26:08.372 "method": "nvmf_create_subsystem", 00:26:08.372 "params": { 00:26:08.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.372 "allow_any_host": false, 00:26:08.372 "serial_number": "SPDK00000000000001", 00:26:08.372 "model_number": "SPDK bdev Controller", 00:26:08.372 "max_namespaces": 10, 00:26:08.372 "min_cntlid": 1, 00:26:08.372 "max_cntlid": 65519, 00:26:08.372 "ana_reporting": false 00:26:08.372 } 00:26:08.372 }, 00:26:08.372 { 00:26:08.372 "method": "nvmf_subsystem_add_host", 00:26:08.372 "params": { 00:26:08.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.372 "host": "nqn.2016-06.io.spdk:host1", 00:26:08.372 "psk": "key0" 00:26:08.372 } 00:26:08.372 }, 00:26:08.372 { 00:26:08.372 "method": "nvmf_subsystem_add_ns", 00:26:08.372 "params": { 00:26:08.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.372 "namespace": { 00:26:08.372 "nsid": 1, 00:26:08.372 "bdev_name": "malloc0", 00:26:08.372 "nguid": 
"CC6EC6AD7E1D4101BDAFFE327D3DF01E", 00:26:08.372 "uuid": "cc6ec6ad-7e1d-4101-bdaf-fe327d3df01e", 00:26:08.372 "no_auto_visible": false 00:26:08.372 } 00:26:08.372 } 00:26:08.372 }, 00:26:08.372 { 00:26:08.372 "method": "nvmf_subsystem_add_listener", 00:26:08.372 "params": { 00:26:08.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.372 "listen_address": { 00:26:08.372 "trtype": "TCP", 00:26:08.372 "adrfam": "IPv4", 00:26:08.372 "traddr": "10.0.0.2", 00:26:08.372 "trsvcid": "4420" 00:26:08.372 }, 00:26:08.372 "secure_channel": true 00:26:08.372 } 00:26:08.372 } 00:26:08.372 ] 00:26:08.372 } 00:26:08.372 ] 00:26:08.372 }' 00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3198138 00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3198138 00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3198138 ']' 00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:08.372 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.372 [2024-11-05 16:50:15.315494] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:08.372 [2024-11-05 16:50:15.315551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.372 [2024-11-05 16:50:15.404300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.372 [2024-11-05 16:50:15.432727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.372 [2024-11-05 16:50:15.432758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.372 [2024-11-05 16:50:15.432763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.372 [2024-11-05 16:50:15.432771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.372 [2024-11-05 16:50:15.432775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:08.372 [2024-11-05 16:50:15.433241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.632 [2024-11-05 16:50:15.625566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.632 [2024-11-05 16:50:15.657593] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:08.632 [2024-11-05 16:50:15.657804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3198192 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3198192 /var/tmp/bdevperf.sock 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3198192 ']' 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:09.201 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:09.202 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:09.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:09.202 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:09.202 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:09.202 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.202 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:26:09.202 "subsystems": [ 00:26:09.202 { 00:26:09.202 "subsystem": "keyring", 00:26:09.202 "config": [ 00:26:09.202 { 00:26:09.202 "method": "keyring_file_add_key", 00:26:09.202 "params": { 00:26:09.202 "name": "key0", 00:26:09.202 "path": "/tmp/tmp.kLnevLjGsF" 00:26:09.202 } 00:26:09.202 } 00:26:09.202 ] 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "subsystem": "iobuf", 00:26:09.202 "config": [ 00:26:09.202 { 00:26:09.202 "method": "iobuf_set_options", 00:26:09.202 "params": { 00:26:09.202 "small_pool_count": 8192, 00:26:09.202 "large_pool_count": 1024, 00:26:09.202 "small_bufsize": 8192, 00:26:09.202 "large_bufsize": 135168, 00:26:09.202 "enable_numa": false 00:26:09.202 } 00:26:09.202 } 00:26:09.202 ] 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "subsystem": "sock", 00:26:09.202 "config": [ 00:26:09.202 { 00:26:09.202 "method": "sock_set_default_impl", 00:26:09.202 "params": { 00:26:09.202 "impl_name": "posix" 00:26:09.202 } 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "method": "sock_impl_set_options", 00:26:09.202 "params": { 00:26:09.202 "impl_name": "ssl", 00:26:09.202 "recv_buf_size": 4096, 00:26:09.202 "send_buf_size": 4096, 00:26:09.202 "enable_recv_pipe": true, 00:26:09.202 "enable_quickack": false, 00:26:09.202 "enable_placement_id": 0, 00:26:09.202 "enable_zerocopy_send_server": true, 00:26:09.202 
"enable_zerocopy_send_client": false, 00:26:09.202 "zerocopy_threshold": 0, 00:26:09.202 "tls_version": 0, 00:26:09.202 "enable_ktls": false 00:26:09.202 } 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "method": "sock_impl_set_options", 00:26:09.202 "params": { 00:26:09.202 "impl_name": "posix", 00:26:09.202 "recv_buf_size": 2097152, 00:26:09.202 "send_buf_size": 2097152, 00:26:09.202 "enable_recv_pipe": true, 00:26:09.202 "enable_quickack": false, 00:26:09.202 "enable_placement_id": 0, 00:26:09.202 "enable_zerocopy_send_server": true, 00:26:09.202 "enable_zerocopy_send_client": false, 00:26:09.202 "zerocopy_threshold": 0, 00:26:09.202 "tls_version": 0, 00:26:09.202 "enable_ktls": false 00:26:09.202 } 00:26:09.202 } 00:26:09.202 ] 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "subsystem": "vmd", 00:26:09.202 "config": [] 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "subsystem": "accel", 00:26:09.202 "config": [ 00:26:09.202 { 00:26:09.202 "method": "accel_set_options", 00:26:09.202 "params": { 00:26:09.202 "small_cache_size": 128, 00:26:09.202 "large_cache_size": 16, 00:26:09.202 "task_count": 2048, 00:26:09.202 "sequence_count": 2048, 00:26:09.202 "buf_count": 2048 00:26:09.202 } 00:26:09.202 } 00:26:09.202 ] 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "subsystem": "bdev", 00:26:09.202 "config": [ 00:26:09.202 { 00:26:09.202 "method": "bdev_set_options", 00:26:09.202 "params": { 00:26:09.202 "bdev_io_pool_size": 65535, 00:26:09.202 "bdev_io_cache_size": 256, 00:26:09.202 "bdev_auto_examine": true, 00:26:09.202 "iobuf_small_cache_size": 128, 00:26:09.202 "iobuf_large_cache_size": 16 00:26:09.202 } 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "method": "bdev_raid_set_options", 00:26:09.202 "params": { 00:26:09.202 "process_window_size_kb": 1024, 00:26:09.202 "process_max_bandwidth_mb_sec": 0 00:26:09.202 } 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "method": "bdev_iscsi_set_options", 00:26:09.202 "params": { 00:26:09.202 "timeout_sec": 30 00:26:09.202 } 00:26:09.202 }, 
00:26:09.202 { 00:26:09.202 "method": "bdev_nvme_set_options", 00:26:09.202 "params": { 00:26:09.202 "action_on_timeout": "none", 00:26:09.202 "timeout_us": 0, 00:26:09.202 "timeout_admin_us": 0, 00:26:09.202 "keep_alive_timeout_ms": 10000, 00:26:09.202 "arbitration_burst": 0, 00:26:09.202 "low_priority_weight": 0, 00:26:09.202 "medium_priority_weight": 0, 00:26:09.202 "high_priority_weight": 0, 00:26:09.202 "nvme_adminq_poll_period_us": 10000, 00:26:09.202 "nvme_ioq_poll_period_us": 0, 00:26:09.202 "io_queue_requests": 512, 00:26:09.202 "delay_cmd_submit": true, 00:26:09.202 "transport_retry_count": 4, 00:26:09.202 "bdev_retry_count": 3, 00:26:09.202 "transport_ack_timeout": 0, 00:26:09.202 "ctrlr_loss_timeout_sec": 0, 00:26:09.202 "reconnect_delay_sec": 0, 00:26:09.202 "fast_io_fail_timeout_sec": 0, 00:26:09.202 "disable_auto_failback": false, 00:26:09.202 "generate_uuids": false, 00:26:09.202 "transport_tos": 0, 00:26:09.202 "nvme_error_stat": false, 00:26:09.202 "rdma_srq_size": 0, 00:26:09.202 "io_path_stat": false, 00:26:09.202 "allow_accel_sequence": false, 00:26:09.202 "rdma_max_cq_size": 0, 00:26:09.202 "rdma_cm_event_timeout_ms": 0, 00:26:09.202 "dhchap_digests": [ 00:26:09.202 "sha256", 00:26:09.202 "sha384", 00:26:09.202 "sha512" 00:26:09.202 ], 00:26:09.202 "dhchap_dhgroups": [ 00:26:09.202 "null", 00:26:09.202 "ffdhe2048", 00:26:09.202 "ffdhe3072", 00:26:09.202 "ffdhe4096", 00:26:09.202 "ffdhe6144", 00:26:09.202 "ffdhe8192" 00:26:09.202 ] 00:26:09.202 } 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "method": "bdev_nvme_attach_controller", 00:26:09.202 "params": { 00:26:09.202 "name": "TLSTEST", 00:26:09.202 "trtype": "TCP", 00:26:09.202 "adrfam": "IPv4", 00:26:09.202 "traddr": "10.0.0.2", 00:26:09.202 "trsvcid": "4420", 00:26:09.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.202 "prchk_reftag": false, 00:26:09.202 "prchk_guard": false, 00:26:09.202 "ctrlr_loss_timeout_sec": 0, 00:26:09.202 "reconnect_delay_sec": 0, 00:26:09.202 
"fast_io_fail_timeout_sec": 0, 00:26:09.202 "psk": "key0", 00:26:09.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:09.202 "hdgst": false, 00:26:09.202 "ddgst": false, 00:26:09.202 "multipath": "multipath" 00:26:09.202 } 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "method": "bdev_nvme_set_hotplug", 00:26:09.202 "params": { 00:26:09.202 "period_us": 100000, 00:26:09.202 "enable": false 00:26:09.202 } 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "method": "bdev_wait_for_examine" 00:26:09.202 } 00:26:09.202 ] 00:26:09.202 }, 00:26:09.202 { 00:26:09.202 "subsystem": "nbd", 00:26:09.202 "config": [] 00:26:09.202 } 00:26:09.202 ] 00:26:09.202 }' 00:26:09.202 [2024-11-05 16:50:16.189797] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:09.202 [2024-11-05 16:50:16.189850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198192 ] 00:26:09.202 [2024-11-05 16:50:16.247059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.463 [2024-11-05 16:50:16.276047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.463 [2024-11-05 16:50:16.409886] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:10.032 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:10.032 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:10.032 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:10.032 Running I/O for 10 seconds... 
00:26:12.351 6169.00 IOPS, 24.10 MiB/s [2024-11-05T15:50:20.351Z] 6279.50 IOPS, 24.53 MiB/s [2024-11-05T15:50:21.290Z] 6298.00 IOPS, 24.60 MiB/s [2024-11-05T15:50:22.229Z] 6304.50 IOPS, 24.63 MiB/s [2024-11-05T15:50:23.183Z] 6118.60 IOPS, 23.90 MiB/s [2024-11-05T15:50:24.127Z] 6188.17 IOPS, 24.17 MiB/s [2024-11-05T15:50:25.508Z] 6166.71 IOPS, 24.09 MiB/s [2024-11-05T15:50:26.447Z] 6039.75 IOPS, 23.59 MiB/s [2024-11-05T15:50:27.386Z] 5947.78 IOPS, 23.23 MiB/s [2024-11-05T15:50:27.386Z] 5864.20 IOPS, 22.91 MiB/s 00:26:20.323 Latency(us) 00:26:20.323 [2024-11-05T15:50:27.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.323 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:20.323 Verification LBA range: start 0x0 length 0x2000 00:26:20.323 TLSTESTn1 : 10.02 5865.46 22.91 0.00 0.00 21786.23 4642.13 23265.28 00:26:20.323 [2024-11-05T15:50:27.386Z] =================================================================================================================== 00:26:20.323 [2024-11-05T15:50:27.386Z] Total : 5865.46 22.91 0.00 0.00 21786.23 4642.13 23265.28 00:26:20.323 { 00:26:20.323 "results": [ 00:26:20.323 { 00:26:20.323 "job": "TLSTESTn1", 00:26:20.323 "core_mask": "0x4", 00:26:20.323 "workload": "verify", 00:26:20.323 "status": "finished", 00:26:20.323 "verify_range": { 00:26:20.323 "start": 0, 00:26:20.323 "length": 8192 00:26:20.323 }, 00:26:20.323 "queue_depth": 128, 00:26:20.323 "io_size": 4096, 00:26:20.323 "runtime": 10.019499, 00:26:20.323 "iops": 5865.462933825334, 00:26:20.323 "mibps": 22.911964585255213, 00:26:20.323 "io_failed": 0, 00:26:20.323 "io_timeout": 0, 00:26:20.323 "avg_latency_us": 21786.22967981986, 00:26:20.323 "min_latency_us": 4642.133333333333, 00:26:20.323 "max_latency_us": 23265.28 00:26:20.323 } 00:26:20.323 ], 00:26:20.323 "core_count": 1 00:26:20.323 } 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3198192 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3198192 ']' 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3198192 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3198192 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3198192' 00:26:20.323 killing process with pid 3198192 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3198192 00:26:20.323 Received shutdown signal, test time was about 10.000000 seconds 00:26:20.323 00:26:20.323 Latency(us) 00:26:20.323 [2024-11-05T15:50:27.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.323 [2024-11-05T15:50:27.386Z] =================================================================================================================== 00:26:20.323 [2024-11-05T15:50:27.386Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3198192 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3198138 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 
-- # '[' -z 3198138 ']' 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3198138 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3198138 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3198138' 00:26:20.323 killing process with pid 3198138 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3198138 00:26:20.323 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3198138 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3200527 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3200527 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:20.584 16:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3200527 ']' 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:20.584 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:20.584 [2024-11-05 16:50:27.528326] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:20.584 [2024-11-05 16:50:27.528390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.584 [2024-11-05 16:50:27.604713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.584 [2024-11-05 16:50:27.640966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.584 [2024-11-05 16:50:27.641000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.584 [2024-11-05 16:50:27.641008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.584 [2024-11-05 16:50:27.641015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:20.584 [2024-11-05 16:50:27.641021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.584 [2024-11-05 16:50:27.641595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.kLnevLjGsF 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kLnevLjGsF 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:21.523 [2024-11-05 16:50:28.502053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.523 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:21.783 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:22.042 [2024-11-05 16:50:28.858948] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:26:22.042 [2024-11-05 16:50:28.859187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.042 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:22.043 malloc0 00:26:22.043 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:22.303 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3200896 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3200896 /var/tmp/bdevperf.sock 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3200896 ']' 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:22.563 
16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:22.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:22.563 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:22.823 [2024-11-05 16:50:29.650174] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:22.824 [2024-11-05 16:50:29.650227] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200896 ] 00:26:22.824 [2024-11-05 16:50:29.732433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.824 [2024-11-05 16:50:29.761716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.824 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:22.824 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:22.824 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:26:23.084 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:23.084 [2024-11-05 16:50:30.135575] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:26:23.344 nvme0n1 00:26:23.344 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:23.344 Running I/O for 1 seconds... 00:26:24.285 4500.00 IOPS, 17.58 MiB/s 00:26:24.285 Latency(us) 00:26:24.285 [2024-11-05T15:50:31.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.285 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:24.285 Verification LBA range: start 0x0 length 0x2000 00:26:24.285 nvme0n1 : 1.01 4561.28 17.82 0.00 0.00 27872.50 6498.99 49588.91 00:26:24.285 [2024-11-05T15:50:31.349Z] =================================================================================================================== 00:26:24.286 [2024-11-05T15:50:31.349Z] Total : 4561.28 17.82 0.00 0.00 27872.50 6498.99 49588.91 00:26:24.286 { 00:26:24.286 "results": [ 00:26:24.286 { 00:26:24.286 "job": "nvme0n1", 00:26:24.286 "core_mask": "0x2", 00:26:24.286 "workload": "verify", 00:26:24.286 "status": "finished", 00:26:24.286 "verify_range": { 00:26:24.286 "start": 0, 00:26:24.286 "length": 8192 00:26:24.286 }, 00:26:24.286 "queue_depth": 128, 00:26:24.286 "io_size": 4096, 00:26:24.286 "runtime": 1.014627, 00:26:24.286 "iops": 4561.282126338053, 00:26:24.286 "mibps": 17.81750830600802, 00:26:24.286 "io_failed": 0, 00:26:24.286 "io_timeout": 0, 00:26:24.286 "avg_latency_us": 27872.495257850765, 00:26:24.286 "min_latency_us": 6498.986666666667, 00:26:24.286 "max_latency_us": 49588.90666666667 00:26:24.286 } 00:26:24.286 ], 00:26:24.286 "core_count": 1 00:26:24.286 } 00:26:24.286 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3200896 00:26:24.286 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3200896 ']' 00:26:24.286 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 3200896 00:26:24.286 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:24.286 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.286 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3200896 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3200896' 00:26:24.546 killing process with pid 3200896 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3200896 00:26:24.546 Received shutdown signal, test time was about 1.000000 seconds 00:26:24.546 00:26:24.546 Latency(us) 00:26:24.546 [2024-11-05T15:50:31.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.546 [2024-11-05T15:50:31.609Z] =================================================================================================================== 00:26:24.546 [2024-11-05T15:50:31.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3200896 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3200527 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3200527 ']' 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3200527 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3200527 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3200527' 00:26:24.546 killing process with pid 3200527 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3200527 00:26:24.546 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3200527 00:26:24.806 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3201250 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3201250 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3201250 ']' 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:24.807 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:24.807 [2024-11-05 16:50:31.754165] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:24.807 [2024-11-05 16:50:31.754218] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.807 [2024-11-05 16:50:31.831657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.807 [2024-11-05 16:50:31.865206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.807 [2024-11-05 16:50:31.865243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.807 [2024-11-05 16:50:31.865252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.807 [2024-11-05 16:50:31.865258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.807 [2024-11-05 16:50:31.865264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:24.807 [2024-11-05 16:50:31.865835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.067 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:25.067 [2024-11-05 16:50:32.004351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.067 malloc0 00:26:25.067 [2024-11-05 16:50:32.031051] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:25.067 [2024-11-05 16:50:32.031286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.067 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.067 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3201325 00:26:25.067 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3201325 /var/tmp/bdevperf.sock 00:26:25.067 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:25.067 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3201325 ']' 00:26:25.067 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:25.067 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:25.068 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:25.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:25.068 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:25.068 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:25.068 [2024-11-05 16:50:32.109268] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:26:25.068 [2024-11-05 16:50:32.109316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201325 ] 00:26:25.328 [2024-11-05 16:50:32.193244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.328 [2024-11-05 16:50:32.223431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.897 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:25.897 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:25.897 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kLnevLjGsF 00:26:26.158 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:26.418 [2024-11-05 16:50:33.247125] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:26.418 nvme0n1 00:26:26.418 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:26.418 Running I/O for 1 seconds... 
00:26:27.642 5557.00 IOPS, 21.71 MiB/s 00:26:27.642 Latency(us) 00:26:27.642 [2024-11-05T15:50:34.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.642 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:27.642 Verification LBA range: start 0x0 length 0x2000 00:26:27.642 nvme0n1 : 1.02 5568.16 21.75 0.00 0.00 22759.43 4560.21 27962.03 00:26:27.642 [2024-11-05T15:50:34.705Z] =================================================================================================================== 00:26:27.642 [2024-11-05T15:50:34.705Z] Total : 5568.16 21.75 0.00 0.00 22759.43 4560.21 27962.03 00:26:27.642 { 00:26:27.642 "results": [ 00:26:27.642 { 00:26:27.642 "job": "nvme0n1", 00:26:27.642 "core_mask": "0x2", 00:26:27.642 "workload": "verify", 00:26:27.642 "status": "finished", 00:26:27.642 "verify_range": { 00:26:27.642 "start": 0, 00:26:27.642 "length": 8192 00:26:27.642 }, 00:26:27.642 "queue_depth": 128, 00:26:27.642 "io_size": 4096, 00:26:27.642 "runtime": 1.020983, 00:26:27.642 "iops": 5568.16323092549, 00:26:27.642 "mibps": 21.750637620802696, 00:26:27.642 "io_failed": 0, 00:26:27.642 "io_timeout": 0, 00:26:27.642 "avg_latency_us": 22759.427002052184, 00:26:27.642 "min_latency_us": 4560.213333333333, 00:26:27.642 "max_latency_us": 27962.02666666667 00:26:27.642 } 00:26:27.642 ], 00:26:27.642 "core_count": 1 00:26:27.642 } 00:26:27.642 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:26:27.642 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.642 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.642 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.642 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:26:27.642 "subsystems": [ 00:26:27.642 { 00:26:27.642 "subsystem": 
"keyring", 00:26:27.642 "config": [ 00:26:27.642 { 00:26:27.642 "method": "keyring_file_add_key", 00:26:27.642 "params": { 00:26:27.642 "name": "key0", 00:26:27.642 "path": "/tmp/tmp.kLnevLjGsF" 00:26:27.642 } 00:26:27.642 } 00:26:27.642 ] 00:26:27.642 }, 00:26:27.642 { 00:26:27.642 "subsystem": "iobuf", 00:26:27.642 "config": [ 00:26:27.642 { 00:26:27.642 "method": "iobuf_set_options", 00:26:27.642 "params": { 00:26:27.642 "small_pool_count": 8192, 00:26:27.642 "large_pool_count": 1024, 00:26:27.642 "small_bufsize": 8192, 00:26:27.642 "large_bufsize": 135168, 00:26:27.642 "enable_numa": false 00:26:27.642 } 00:26:27.642 } 00:26:27.642 ] 00:26:27.642 }, 00:26:27.642 { 00:26:27.642 "subsystem": "sock", 00:26:27.642 "config": [ 00:26:27.642 { 00:26:27.642 "method": "sock_set_default_impl", 00:26:27.642 "params": { 00:26:27.642 "impl_name": "posix" 00:26:27.642 } 00:26:27.642 }, 00:26:27.642 { 00:26:27.642 "method": "sock_impl_set_options", 00:26:27.642 "params": { 00:26:27.642 "impl_name": "ssl", 00:26:27.642 "recv_buf_size": 4096, 00:26:27.642 "send_buf_size": 4096, 00:26:27.642 "enable_recv_pipe": true, 00:26:27.642 "enable_quickack": false, 00:26:27.642 "enable_placement_id": 0, 00:26:27.642 "enable_zerocopy_send_server": true, 00:26:27.642 "enable_zerocopy_send_client": false, 00:26:27.642 "zerocopy_threshold": 0, 00:26:27.642 "tls_version": 0, 00:26:27.642 "enable_ktls": false 00:26:27.642 } 00:26:27.642 }, 00:26:27.642 { 00:26:27.642 "method": "sock_impl_set_options", 00:26:27.642 "params": { 00:26:27.642 "impl_name": "posix", 00:26:27.642 "recv_buf_size": 2097152, 00:26:27.642 "send_buf_size": 2097152, 00:26:27.642 "enable_recv_pipe": true, 00:26:27.642 "enable_quickack": false, 00:26:27.642 "enable_placement_id": 0, 00:26:27.642 "enable_zerocopy_send_server": true, 00:26:27.642 "enable_zerocopy_send_client": false, 00:26:27.642 "zerocopy_threshold": 0, 00:26:27.642 "tls_version": 0, 00:26:27.642 "enable_ktls": false 00:26:27.642 } 00:26:27.642 } 00:26:27.642 
] 00:26:27.642 }, 00:26:27.642 { 00:26:27.642 "subsystem": "vmd", 00:26:27.642 "config": [] 00:26:27.642 }, 00:26:27.642 { 00:26:27.642 "subsystem": "accel", 00:26:27.642 "config": [ 00:26:27.642 { 00:26:27.642 "method": "accel_set_options", 00:26:27.642 "params": { 00:26:27.642 "small_cache_size": 128, 00:26:27.642 "large_cache_size": 16, 00:26:27.642 "task_count": 2048, 00:26:27.642 "sequence_count": 2048, 00:26:27.642 "buf_count": 2048 00:26:27.642 } 00:26:27.642 } 00:26:27.642 ] 00:26:27.642 }, 00:26:27.642 { 00:26:27.642 "subsystem": "bdev", 00:26:27.642 "config": [ 00:26:27.642 { 00:26:27.642 "method": "bdev_set_options", 00:26:27.642 "params": { 00:26:27.642 "bdev_io_pool_size": 65535, 00:26:27.642 "bdev_io_cache_size": 256, 00:26:27.642 "bdev_auto_examine": true, 00:26:27.642 "iobuf_small_cache_size": 128, 00:26:27.642 "iobuf_large_cache_size": 16 00:26:27.642 } 00:26:27.642 }, 00:26:27.642 { 00:26:27.642 "method": "bdev_raid_set_options", 00:26:27.642 "params": { 00:26:27.642 "process_window_size_kb": 1024, 00:26:27.643 "process_max_bandwidth_mb_sec": 0 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "bdev_iscsi_set_options", 00:26:27.643 "params": { 00:26:27.643 "timeout_sec": 30 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "bdev_nvme_set_options", 00:26:27.643 "params": { 00:26:27.643 "action_on_timeout": "none", 00:26:27.643 "timeout_us": 0, 00:26:27.643 "timeout_admin_us": 0, 00:26:27.643 "keep_alive_timeout_ms": 10000, 00:26:27.643 "arbitration_burst": 0, 00:26:27.643 "low_priority_weight": 0, 00:26:27.643 "medium_priority_weight": 0, 00:26:27.643 "high_priority_weight": 0, 00:26:27.643 "nvme_adminq_poll_period_us": 10000, 00:26:27.643 "nvme_ioq_poll_period_us": 0, 00:26:27.643 "io_queue_requests": 0, 00:26:27.643 "delay_cmd_submit": true, 00:26:27.643 "transport_retry_count": 4, 00:26:27.643 "bdev_retry_count": 3, 00:26:27.643 "transport_ack_timeout": 0, 00:26:27.643 "ctrlr_loss_timeout_sec": 0, 
00:26:27.643 "reconnect_delay_sec": 0, 00:26:27.643 "fast_io_fail_timeout_sec": 0, 00:26:27.643 "disable_auto_failback": false, 00:26:27.643 "generate_uuids": false, 00:26:27.643 "transport_tos": 0, 00:26:27.643 "nvme_error_stat": false, 00:26:27.643 "rdma_srq_size": 0, 00:26:27.643 "io_path_stat": false, 00:26:27.643 "allow_accel_sequence": false, 00:26:27.643 "rdma_max_cq_size": 0, 00:26:27.643 "rdma_cm_event_timeout_ms": 0, 00:26:27.643 "dhchap_digests": [ 00:26:27.643 "sha256", 00:26:27.643 "sha384", 00:26:27.643 "sha512" 00:26:27.643 ], 00:26:27.643 "dhchap_dhgroups": [ 00:26:27.643 "null", 00:26:27.643 "ffdhe2048", 00:26:27.643 "ffdhe3072", 00:26:27.643 "ffdhe4096", 00:26:27.643 "ffdhe6144", 00:26:27.643 "ffdhe8192" 00:26:27.643 ] 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "bdev_nvme_set_hotplug", 00:26:27.643 "params": { 00:26:27.643 "period_us": 100000, 00:26:27.643 "enable": false 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "bdev_malloc_create", 00:26:27.643 "params": { 00:26:27.643 "name": "malloc0", 00:26:27.643 "num_blocks": 8192, 00:26:27.643 "block_size": 4096, 00:26:27.643 "physical_block_size": 4096, 00:26:27.643 "uuid": "68dce871-e158-4f02-a42f-1ce51d0224cb", 00:26:27.643 "optimal_io_boundary": 0, 00:26:27.643 "md_size": 0, 00:26:27.643 "dif_type": 0, 00:26:27.643 "dif_is_head_of_md": false, 00:26:27.643 "dif_pi_format": 0 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "bdev_wait_for_examine" 00:26:27.643 } 00:26:27.643 ] 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "subsystem": "nbd", 00:26:27.643 "config": [] 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "subsystem": "scheduler", 00:26:27.643 "config": [ 00:26:27.643 { 00:26:27.643 "method": "framework_set_scheduler", 00:26:27.643 "params": { 00:26:27.643 "name": "static" 00:26:27.643 } 00:26:27.643 } 00:26:27.643 ] 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "subsystem": "nvmf", 00:26:27.643 "config": [ 00:26:27.643 { 
00:26:27.643 "method": "nvmf_set_config", 00:26:27.643 "params": { 00:26:27.643 "discovery_filter": "match_any", 00:26:27.643 "admin_cmd_passthru": { 00:26:27.643 "identify_ctrlr": false 00:26:27.643 }, 00:26:27.643 "dhchap_digests": [ 00:26:27.643 "sha256", 00:26:27.643 "sha384", 00:26:27.643 "sha512" 00:26:27.643 ], 00:26:27.643 "dhchap_dhgroups": [ 00:26:27.643 "null", 00:26:27.643 "ffdhe2048", 00:26:27.643 "ffdhe3072", 00:26:27.643 "ffdhe4096", 00:26:27.643 "ffdhe6144", 00:26:27.643 "ffdhe8192" 00:26:27.643 ] 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "nvmf_set_max_subsystems", 00:26:27.643 "params": { 00:26:27.643 "max_subsystems": 1024 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "nvmf_set_crdt", 00:26:27.643 "params": { 00:26:27.643 "crdt1": 0, 00:26:27.643 "crdt2": 0, 00:26:27.643 "crdt3": 0 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "nvmf_create_transport", 00:26:27.643 "params": { 00:26:27.643 "trtype": "TCP", 00:26:27.643 "max_queue_depth": 128, 00:26:27.643 "max_io_qpairs_per_ctrlr": 127, 00:26:27.643 "in_capsule_data_size": 4096, 00:26:27.643 "max_io_size": 131072, 00:26:27.643 "io_unit_size": 131072, 00:26:27.643 "max_aq_depth": 128, 00:26:27.643 "num_shared_buffers": 511, 00:26:27.643 "buf_cache_size": 4294967295, 00:26:27.643 "dif_insert_or_strip": false, 00:26:27.643 "zcopy": false, 00:26:27.643 "c2h_success": false, 00:26:27.643 "sock_priority": 0, 00:26:27.643 "abort_timeout_sec": 1, 00:26:27.643 "ack_timeout": 0, 00:26:27.643 "data_wr_pool_size": 0 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "nvmf_create_subsystem", 00:26:27.643 "params": { 00:26:27.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.643 "allow_any_host": false, 00:26:27.643 "serial_number": "00000000000000000000", 00:26:27.643 "model_number": "SPDK bdev Controller", 00:26:27.643 "max_namespaces": 32, 00:26:27.643 "min_cntlid": 1, 00:26:27.643 "max_cntlid": 65519, 00:26:27.643 
"ana_reporting": false 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "nvmf_subsystem_add_host", 00:26:27.643 "params": { 00:26:27.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.643 "host": "nqn.2016-06.io.spdk:host1", 00:26:27.643 "psk": "key0" 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "nvmf_subsystem_add_ns", 00:26:27.643 "params": { 00:26:27.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.643 "namespace": { 00:26:27.643 "nsid": 1, 00:26:27.643 "bdev_name": "malloc0", 00:26:27.643 "nguid": "68DCE871E1584F02A42F1CE51D0224CB", 00:26:27.643 "uuid": "68dce871-e158-4f02-a42f-1ce51d0224cb", 00:26:27.643 "no_auto_visible": false 00:26:27.643 } 00:26:27.643 } 00:26:27.643 }, 00:26:27.643 { 00:26:27.643 "method": "nvmf_subsystem_add_listener", 00:26:27.643 "params": { 00:26:27.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.643 "listen_address": { 00:26:27.643 "trtype": "TCP", 00:26:27.643 "adrfam": "IPv4", 00:26:27.643 "traddr": "10.0.0.2", 00:26:27.643 "trsvcid": "4420" 00:26:27.643 }, 00:26:27.643 "secure_channel": false, 00:26:27.643 "sock_impl": "ssl" 00:26:27.643 } 00:26:27.643 } 00:26:27.643 ] 00:26:27.643 } 00:26:27.643 ] 00:26:27.643 }' 00:26:27.643 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:27.904 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:26:27.904 "subsystems": [ 00:26:27.904 { 00:26:27.904 "subsystem": "keyring", 00:26:27.904 "config": [ 00:26:27.904 { 00:26:27.904 "method": "keyring_file_add_key", 00:26:27.904 "params": { 00:26:27.904 "name": "key0", 00:26:27.904 "path": "/tmp/tmp.kLnevLjGsF" 00:26:27.904 } 00:26:27.904 } 00:26:27.904 ] 00:26:27.904 }, 00:26:27.904 { 00:26:27.904 "subsystem": "iobuf", 00:26:27.904 "config": [ 00:26:27.904 { 00:26:27.904 "method": "iobuf_set_options", 00:26:27.904 "params": { 00:26:27.904 
"small_pool_count": 8192, 00:26:27.904 "large_pool_count": 1024, 00:26:27.904 "small_bufsize": 8192, 00:26:27.904 "large_bufsize": 135168, 00:26:27.904 "enable_numa": false 00:26:27.904 } 00:26:27.904 } 00:26:27.904 ] 00:26:27.904 }, 00:26:27.904 { 00:26:27.904 "subsystem": "sock", 00:26:27.904 "config": [ 00:26:27.904 { 00:26:27.904 "method": "sock_set_default_impl", 00:26:27.904 "params": { 00:26:27.904 "impl_name": "posix" 00:26:27.904 } 00:26:27.904 }, 00:26:27.904 { 00:26:27.904 "method": "sock_impl_set_options", 00:26:27.904 "params": { 00:26:27.904 "impl_name": "ssl", 00:26:27.904 "recv_buf_size": 4096, 00:26:27.904 "send_buf_size": 4096, 00:26:27.904 "enable_recv_pipe": true, 00:26:27.904 "enable_quickack": false, 00:26:27.904 "enable_placement_id": 0, 00:26:27.904 "enable_zerocopy_send_server": true, 00:26:27.904 "enable_zerocopy_send_client": false, 00:26:27.904 "zerocopy_threshold": 0, 00:26:27.904 "tls_version": 0, 00:26:27.904 "enable_ktls": false 00:26:27.904 } 00:26:27.904 }, 00:26:27.904 { 00:26:27.904 "method": "sock_impl_set_options", 00:26:27.904 "params": { 00:26:27.904 "impl_name": "posix", 00:26:27.904 "recv_buf_size": 2097152, 00:26:27.904 "send_buf_size": 2097152, 00:26:27.904 "enable_recv_pipe": true, 00:26:27.904 "enable_quickack": false, 00:26:27.904 "enable_placement_id": 0, 00:26:27.904 "enable_zerocopy_send_server": true, 00:26:27.904 "enable_zerocopy_send_client": false, 00:26:27.904 "zerocopy_threshold": 0, 00:26:27.904 "tls_version": 0, 00:26:27.904 "enable_ktls": false 00:26:27.904 } 00:26:27.904 } 00:26:27.904 ] 00:26:27.904 }, 00:26:27.904 { 00:26:27.904 "subsystem": "vmd", 00:26:27.904 "config": [] 00:26:27.904 }, 00:26:27.904 { 00:26:27.904 "subsystem": "accel", 00:26:27.905 "config": [ 00:26:27.905 { 00:26:27.905 "method": "accel_set_options", 00:26:27.905 "params": { 00:26:27.905 "small_cache_size": 128, 00:26:27.905 "large_cache_size": 16, 00:26:27.905 "task_count": 2048, 00:26:27.905 "sequence_count": 2048, 00:26:27.905 
"buf_count": 2048 00:26:27.905 } 00:26:27.905 } 00:26:27.905 ] 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "subsystem": "bdev", 00:26:27.905 "config": [ 00:26:27.905 { 00:26:27.905 "method": "bdev_set_options", 00:26:27.905 "params": { 00:26:27.905 "bdev_io_pool_size": 65535, 00:26:27.905 "bdev_io_cache_size": 256, 00:26:27.905 "bdev_auto_examine": true, 00:26:27.905 "iobuf_small_cache_size": 128, 00:26:27.905 "iobuf_large_cache_size": 16 00:26:27.905 } 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "method": "bdev_raid_set_options", 00:26:27.905 "params": { 00:26:27.905 "process_window_size_kb": 1024, 00:26:27.905 "process_max_bandwidth_mb_sec": 0 00:26:27.905 } 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "method": "bdev_iscsi_set_options", 00:26:27.905 "params": { 00:26:27.905 "timeout_sec": 30 00:26:27.905 } 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "method": "bdev_nvme_set_options", 00:26:27.905 "params": { 00:26:27.905 "action_on_timeout": "none", 00:26:27.905 "timeout_us": 0, 00:26:27.905 "timeout_admin_us": 0, 00:26:27.905 "keep_alive_timeout_ms": 10000, 00:26:27.905 "arbitration_burst": 0, 00:26:27.905 "low_priority_weight": 0, 00:26:27.905 "medium_priority_weight": 0, 00:26:27.905 "high_priority_weight": 0, 00:26:27.905 "nvme_adminq_poll_period_us": 10000, 00:26:27.905 "nvme_ioq_poll_period_us": 0, 00:26:27.905 "io_queue_requests": 512, 00:26:27.905 "delay_cmd_submit": true, 00:26:27.905 "transport_retry_count": 4, 00:26:27.905 "bdev_retry_count": 3, 00:26:27.905 "transport_ack_timeout": 0, 00:26:27.905 "ctrlr_loss_timeout_sec": 0, 00:26:27.905 "reconnect_delay_sec": 0, 00:26:27.905 "fast_io_fail_timeout_sec": 0, 00:26:27.905 "disable_auto_failback": false, 00:26:27.905 "generate_uuids": false, 00:26:27.905 "transport_tos": 0, 00:26:27.905 "nvme_error_stat": false, 00:26:27.905 "rdma_srq_size": 0, 00:26:27.905 "io_path_stat": false, 00:26:27.905 "allow_accel_sequence": false, 00:26:27.905 "rdma_max_cq_size": 0, 00:26:27.905 "rdma_cm_event_timeout_ms": 0, 
00:26:27.905 "dhchap_digests": [ 00:26:27.905 "sha256", 00:26:27.905 "sha384", 00:26:27.905 "sha512" 00:26:27.905 ], 00:26:27.905 "dhchap_dhgroups": [ 00:26:27.905 "null", 00:26:27.905 "ffdhe2048", 00:26:27.905 "ffdhe3072", 00:26:27.905 "ffdhe4096", 00:26:27.905 "ffdhe6144", 00:26:27.905 "ffdhe8192" 00:26:27.905 ] 00:26:27.905 } 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "method": "bdev_nvme_attach_controller", 00:26:27.905 "params": { 00:26:27.905 "name": "nvme0", 00:26:27.905 "trtype": "TCP", 00:26:27.905 "adrfam": "IPv4", 00:26:27.905 "traddr": "10.0.0.2", 00:26:27.905 "trsvcid": "4420", 00:26:27.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.905 "prchk_reftag": false, 00:26:27.905 "prchk_guard": false, 00:26:27.905 "ctrlr_loss_timeout_sec": 0, 00:26:27.905 "reconnect_delay_sec": 0, 00:26:27.905 "fast_io_fail_timeout_sec": 0, 00:26:27.905 "psk": "key0", 00:26:27.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.905 "hdgst": false, 00:26:27.905 "ddgst": false, 00:26:27.905 "multipath": "multipath" 00:26:27.905 } 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "method": "bdev_nvme_set_hotplug", 00:26:27.905 "params": { 00:26:27.905 "period_us": 100000, 00:26:27.905 "enable": false 00:26:27.905 } 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "method": "bdev_enable_histogram", 00:26:27.905 "params": { 00:26:27.905 "name": "nvme0n1", 00:26:27.905 "enable": true 00:26:27.905 } 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "method": "bdev_wait_for_examine" 00:26:27.905 } 00:26:27.905 ] 00:26:27.905 }, 00:26:27.905 { 00:26:27.905 "subsystem": "nbd", 00:26:27.905 "config": [] 00:26:27.905 } 00:26:27.905 ] 00:26:27.905 }' 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3201325 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3201325 ']' 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3201325 00:26:27.905 16:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3201325 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3201325' 00:26:27.905 killing process with pid 3201325 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3201325 00:26:27.905 Received shutdown signal, test time was about 1.000000 seconds 00:26:27.905 00:26:27.905 Latency(us) 00:26:27.905 [2024-11-05T15:50:34.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.905 [2024-11-05T15:50:34.968Z] =================================================================================================================== 00:26:27.905 [2024-11-05T15:50:34.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.905 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3201325 00:26:28.167 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3201250 00:26:28.167 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3201250 ']' 00:26:28.167 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3201250 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:28.167 
16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3201250 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3201250' 00:26:28.167 killing process with pid 3201250 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3201250 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3201250 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.167 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:26:28.167 "subsystems": [ 00:26:28.167 { 00:26:28.167 "subsystem": "keyring", 00:26:28.167 "config": [ 00:26:28.167 { 00:26:28.167 "method": "keyring_file_add_key", 00:26:28.167 "params": { 00:26:28.167 "name": "key0", 00:26:28.167 "path": "/tmp/tmp.kLnevLjGsF" 00:26:28.167 } 00:26:28.167 } 00:26:28.167 ] 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "subsystem": "iobuf", 00:26:28.167 "config": [ 00:26:28.167 { 00:26:28.167 "method": "iobuf_set_options", 00:26:28.167 "params": { 00:26:28.167 "small_pool_count": 8192, 00:26:28.167 "large_pool_count": 1024, 00:26:28.167 "small_bufsize": 8192, 00:26:28.167 "large_bufsize": 135168, 00:26:28.167 "enable_numa": false 00:26:28.167 } 00:26:28.167 } 00:26:28.167 ] 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "subsystem": "sock", 00:26:28.167 "config": [ 
00:26:28.167 { 00:26:28.167 "method": "sock_set_default_impl", 00:26:28.167 "params": { 00:26:28.167 "impl_name": "posix" 00:26:28.167 } 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "method": "sock_impl_set_options", 00:26:28.167 "params": { 00:26:28.167 "impl_name": "ssl", 00:26:28.167 "recv_buf_size": 4096, 00:26:28.167 "send_buf_size": 4096, 00:26:28.167 "enable_recv_pipe": true, 00:26:28.167 "enable_quickack": false, 00:26:28.167 "enable_placement_id": 0, 00:26:28.167 "enable_zerocopy_send_server": true, 00:26:28.167 "enable_zerocopy_send_client": false, 00:26:28.167 "zerocopy_threshold": 0, 00:26:28.167 "tls_version": 0, 00:26:28.167 "enable_ktls": false 00:26:28.167 } 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "method": "sock_impl_set_options", 00:26:28.167 "params": { 00:26:28.167 "impl_name": "posix", 00:26:28.167 "recv_buf_size": 2097152, 00:26:28.167 "send_buf_size": 2097152, 00:26:28.167 "enable_recv_pipe": true, 00:26:28.167 "enable_quickack": false, 00:26:28.167 "enable_placement_id": 0, 00:26:28.167 "enable_zerocopy_send_server": true, 00:26:28.167 "enable_zerocopy_send_client": false, 00:26:28.167 "zerocopy_threshold": 0, 00:26:28.167 "tls_version": 0, 00:26:28.167 "enable_ktls": false 00:26:28.167 } 00:26:28.167 } 00:26:28.167 ] 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "subsystem": "vmd", 00:26:28.167 "config": [] 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "subsystem": "accel", 00:26:28.167 "config": [ 00:26:28.167 { 00:26:28.167 "method": "accel_set_options", 00:26:28.167 "params": { 00:26:28.167 "small_cache_size": 128, 00:26:28.167 "large_cache_size": 16, 00:26:28.167 "task_count": 2048, 00:26:28.167 "sequence_count": 2048, 00:26:28.167 "buf_count": 2048 00:26:28.167 } 00:26:28.167 } 00:26:28.167 ] 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "subsystem": "bdev", 00:26:28.167 "config": [ 00:26:28.167 { 00:26:28.167 "method": "bdev_set_options", 00:26:28.167 "params": { 00:26:28.167 "bdev_io_pool_size": 65535, 00:26:28.167 "bdev_io_cache_size": 
256, 00:26:28.167 "bdev_auto_examine": true, 00:26:28.167 "iobuf_small_cache_size": 128, 00:26:28.167 "iobuf_large_cache_size": 16 00:26:28.167 } 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "method": "bdev_raid_set_options", 00:26:28.167 "params": { 00:26:28.167 "process_window_size_kb": 1024, 00:26:28.167 "process_max_bandwidth_mb_sec": 0 00:26:28.167 } 00:26:28.167 }, 00:26:28.167 { 00:26:28.167 "method": "bdev_iscsi_set_options", 00:26:28.167 "params": { 00:26:28.167 "timeout_sec": 30 00:26:28.167 } 00:26:28.167 }, 00:26:28.167 { 00:26:28.168 "method": "bdev_nvme_set_options", 00:26:28.168 "params": { 00:26:28.168 "action_on_timeout": "none", 00:26:28.168 "timeout_us": 0, 00:26:28.168 "timeout_admin_us": 0, 00:26:28.168 "keep_alive_timeout_ms": 10000, 00:26:28.168 "arbitration_burst": 0, 00:26:28.168 "low_priority_weight": 0, 00:26:28.168 "medium_priority_weight": 0, 00:26:28.168 "high_priority_weight": 0, 00:26:28.168 "nvme_adminq_poll_period_us": 10000, 00:26:28.168 "nvme_ioq_poll_period_us": 0, 00:26:28.168 "io_queue_requests": 0, 00:26:28.168 "delay_cmd_submit": true, 00:26:28.168 "transport_retry_count": 4, 00:26:28.168 "bdev_retry_count": 3, 00:26:28.168 "transport_ack_timeout": 0, 00:26:28.168 "ctrlr_loss_timeout_sec": 0, 00:26:28.168 "reconnect_delay_sec": 0, 00:26:28.168 "fast_io_fail_timeout_sec": 0, 00:26:28.168 "disable_auto_failback": false, 00:26:28.168 "generate_uuids": false, 00:26:28.168 "transport_tos": 0, 00:26:28.168 "nvme_error_stat": false, 00:26:28.168 "rdma_srq_size": 0, 00:26:28.168 "io_path_stat": false, 00:26:28.168 "allow_accel_sequence": false, 00:26:28.168 "rdma_max_cq_size": 0, 00:26:28.168 "rdma_cm_event_timeout_ms": 0, 00:26:28.168 "dhchap_digests": [ 00:26:28.168 "sha256", 00:26:28.168 "sha384", 00:26:28.168 "sha512" 00:26:28.168 ], 00:26:28.168 "dhchap_dhgroups": [ 00:26:28.168 "null", 00:26:28.168 "ffdhe2048", 00:26:28.168 "ffdhe3072", 00:26:28.168 "ffdhe4096", 00:26:28.168 "ffdhe6144", 00:26:28.168 "ffdhe8192" 00:26:28.168 ] 
00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "bdev_nvme_set_hotplug", 00:26:28.168 "params": { 00:26:28.168 "period_us": 100000, 00:26:28.168 "enable": false 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "bdev_malloc_create", 00:26:28.168 "params": { 00:26:28.168 "name": "malloc0", 00:26:28.168 "num_blocks": 8192, 00:26:28.168 "block_size": 4096, 00:26:28.168 "physical_block_size": 4096, 00:26:28.168 "uuid": "68dce871-e158-4f02-a42f-1ce51d0224cb", 00:26:28.168 "optimal_io_boundary": 0, 00:26:28.168 "md_size": 0, 00:26:28.168 "dif_type": 0, 00:26:28.168 "dif_is_head_of_md": false, 00:26:28.168 "dif_pi_format": 0 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "bdev_wait_for_examine" 00:26:28.168 } 00:26:28.168 ] 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "subsystem": "nbd", 00:26:28.168 "config": [] 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "subsystem": "scheduler", 00:26:28.168 "config": [ 00:26:28.168 { 00:26:28.168 "method": "framework_set_scheduler", 00:26:28.168 "params": { 00:26:28.168 "name": "static" 00:26:28.168 } 00:26:28.168 } 00:26:28.168 ] 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "subsystem": "nvmf", 00:26:28.168 "config": [ 00:26:28.168 { 00:26:28.168 "method": "nvmf_set_config", 00:26:28.168 "params": { 00:26:28.168 "discovery_filter": "match_any", 00:26:28.168 "admin_cmd_passthru": { 00:26:28.168 "identify_ctrlr": false 00:26:28.168 }, 00:26:28.168 "dhchap_digests": [ 00:26:28.168 "sha256", 00:26:28.168 "sha384", 00:26:28.168 "sha512" 00:26:28.168 ], 00:26:28.168 "dhchap_dhgroups": [ 00:26:28.168 "null", 00:26:28.168 "ffdhe2048", 00:26:28.168 "ffdhe3072", 00:26:28.168 "ffdhe4096", 00:26:28.168 "ffdhe6144", 00:26:28.168 "ffdhe8192" 00:26:28.168 ] 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "nvmf_set_max_subsystems", 00:26:28.168 "params": { 00:26:28.168 "max_subsystems": 1024 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": 
"nvmf_set_crdt", 00:26:28.168 "params": { 00:26:28.168 "crdt1": 0, 00:26:28.168 "crdt2": 0, 00:26:28.168 "crdt3": 0 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "nvmf_create_transport", 00:26:28.168 "params": { 00:26:28.168 "trtype": "TCP", 00:26:28.168 "max_queue_depth": 128, 00:26:28.168 "max_io_qpairs_per_ctrlr": 127, 00:26:28.168 "in_capsule_data_size": 4096, 00:26:28.168 "max_io_size": 131072, 00:26:28.168 "io_unit_size": 131072, 00:26:28.168 "max_aq_depth": 128, 00:26:28.168 "num_shared_buffers": 511, 00:26:28.168 "buf_cache_size": 4294967295, 00:26:28.168 "dif_insert_or_strip": false, 00:26:28.168 "zcopy": false, 00:26:28.168 "c2h_success": false, 00:26:28.168 "sock_priority": 0, 00:26:28.168 "abort_timeout_sec": 1, 00:26:28.168 "ack_timeout": 0, 00:26:28.168 "data_wr_pool_size": 0 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "nvmf_create_subsystem", 00:26:28.168 "params": { 00:26:28.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.168 "allow_any_host": false, 00:26:28.168 "serial_number": "00000000000000000000", 00:26:28.168 "model_number": "SPDK bdev Controller", 00:26:28.168 "max_namespaces": 32, 00:26:28.168 "min_cntlid": 1, 00:26:28.168 "max_cntlid": 65519, 00:26:28.168 "ana_reporting": false 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "nvmf_subsystem_add_host", 00:26:28.168 "params": { 00:26:28.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.168 "host": "nqn.2016-06.io.spdk:host1", 00:26:28.168 "psk": "key0" 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "nvmf_subsystem_add_ns", 00:26:28.168 "params": { 00:26:28.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.168 "namespace": { 00:26:28.168 "nsid": 1, 00:26:28.168 "bdev_name": "malloc0", 00:26:28.168 "nguid": "68DCE871E1584F02A42F1CE51D0224CB", 00:26:28.168 "uuid": "68dce871-e158-4f02-a42f-1ce51d0224cb", 
00:26:28.168 "no_auto_visible": false 00:26:28.168 } 00:26:28.168 } 00:26:28.168 }, 00:26:28.168 { 00:26:28.168 "method": "nvmf_subsystem_add_listener", 00:26:28.168 "params": { 00:26:28.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.168 "listen_address": { 00:26:28.168 "trtype": "TCP", 00:26:28.168 "adrfam": "IPv4", 00:26:28.168 "traddr": "10.0.0.2", 00:26:28.168 "trsvcid": "4420" 00:26:28.168 }, 00:26:28.168 "secure_channel": false, 00:26:28.168 "sock_impl": "ssl" 00:26:28.168 } 00:26:28.168 } 00:26:28.168 ] 00:26:28.168 } 00:26:28.168 ] 00:26:28.168 }' 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3201953 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3201953 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3201953 ']' 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:28.168 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:28.429 [2024-11-05 16:50:35.252280] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:26:28.429 [2024-11-05 16:50:35.252332] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.429 [2024-11-05 16:50:35.329373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.429 [2024-11-05 16:50:35.362905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.429 [2024-11-05 16:50:35.362940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.429 [2024-11-05 16:50:35.362949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.429 [2024-11-05 16:50:35.362956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.430 [2024-11-05 16:50:35.362961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
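The target above is launched with `-c /dev/fd/62`, i.e. the generated JSON configuration printed before it is handed over through a file-descriptor path rather than a file on disk (fd 62 comes from process substitution in the test script). A minimal stand-alone sketch of the same mechanism; `consumer` is a hypothetical stand-in for `nvmf_tgt` reading its `-c` argument:

```shell
# Hedged sketch: hand JSON to a consumer through a /dev/fd path, the same
# mechanism as 'nvmf_tgt ... -c /dev/fd/62' in the trace above.
consumer() { cat "$1"; }    # hypothetical stand-in for nvmf_tgt -c <path>
consumer /dev/fd/3 3<<'EOF'
{"subsystems": []}
EOF
```

The consumer never knows the config was not a regular file; it just opens the path it was given.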
00:26:28.430 [2024-11-05 16:50:35.363518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.690 [2024-11-05 16:50:35.562287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.690 [2024-11-05 16:50:35.594302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:28.690 [2024-11-05 16:50:35.594542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3202302 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3202302 /var/tmp/bdevperf.sock 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3202302 ']' 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:29.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:29.262 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:26:29.262 "subsystems": [ 00:26:29.262 { 00:26:29.262 "subsystem": "keyring", 00:26:29.262 "config": [ 00:26:29.262 { 00:26:29.262 "method": "keyring_file_add_key", 00:26:29.262 "params": { 00:26:29.262 "name": "key0", 00:26:29.262 "path": "/tmp/tmp.kLnevLjGsF" 00:26:29.262 } 00:26:29.262 } 00:26:29.262 ] 00:26:29.262 }, 00:26:29.262 { 00:26:29.262 "subsystem": "iobuf", 00:26:29.262 "config": [ 00:26:29.262 { 00:26:29.262 "method": "iobuf_set_options", 00:26:29.262 "params": { 00:26:29.262 "small_pool_count": 8192, 00:26:29.262 "large_pool_count": 1024, 00:26:29.263 "small_bufsize": 8192, 00:26:29.263 "large_bufsize": 135168, 00:26:29.263 "enable_numa": false 00:26:29.263 } 00:26:29.263 } 00:26:29.263 ] 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "subsystem": "sock", 00:26:29.263 "config": [ 00:26:29.263 { 00:26:29.263 "method": "sock_set_default_impl", 00:26:29.263 "params": { 00:26:29.263 "impl_name": "posix" 00:26:29.263 } 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "method": "sock_impl_set_options", 00:26:29.263 "params": { 00:26:29.263 "impl_name": "ssl", 00:26:29.263 "recv_buf_size": 4096, 00:26:29.263 "send_buf_size": 4096, 00:26:29.263 "enable_recv_pipe": true, 00:26:29.263 "enable_quickack": false, 00:26:29.263 "enable_placement_id": 0, 00:26:29.263 "enable_zerocopy_send_server": true, 00:26:29.263 
"enable_zerocopy_send_client": false, 00:26:29.263 "zerocopy_threshold": 0, 00:26:29.263 "tls_version": 0, 00:26:29.263 "enable_ktls": false 00:26:29.263 } 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "method": "sock_impl_set_options", 00:26:29.263 "params": { 00:26:29.263 "impl_name": "posix", 00:26:29.263 "recv_buf_size": 2097152, 00:26:29.263 "send_buf_size": 2097152, 00:26:29.263 "enable_recv_pipe": true, 00:26:29.263 "enable_quickack": false, 00:26:29.263 "enable_placement_id": 0, 00:26:29.263 "enable_zerocopy_send_server": true, 00:26:29.263 "enable_zerocopy_send_client": false, 00:26:29.263 "zerocopy_threshold": 0, 00:26:29.263 "tls_version": 0, 00:26:29.263 "enable_ktls": false 00:26:29.263 } 00:26:29.263 } 00:26:29.263 ] 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "subsystem": "vmd", 00:26:29.263 "config": [] 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "subsystem": "accel", 00:26:29.263 "config": [ 00:26:29.263 { 00:26:29.263 "method": "accel_set_options", 00:26:29.263 "params": { 00:26:29.263 "small_cache_size": 128, 00:26:29.263 "large_cache_size": 16, 00:26:29.263 "task_count": 2048, 00:26:29.263 "sequence_count": 2048, 00:26:29.263 "buf_count": 2048 00:26:29.263 } 00:26:29.263 } 00:26:29.263 ] 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "subsystem": "bdev", 00:26:29.263 "config": [ 00:26:29.263 { 00:26:29.263 "method": "bdev_set_options", 00:26:29.263 "params": { 00:26:29.263 "bdev_io_pool_size": 65535, 00:26:29.263 "bdev_io_cache_size": 256, 00:26:29.263 "bdev_auto_examine": true, 00:26:29.263 "iobuf_small_cache_size": 128, 00:26:29.263 "iobuf_large_cache_size": 16 00:26:29.263 } 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "method": "bdev_raid_set_options", 00:26:29.263 "params": { 00:26:29.263 "process_window_size_kb": 1024, 00:26:29.263 "process_max_bandwidth_mb_sec": 0 00:26:29.263 } 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "method": "bdev_iscsi_set_options", 00:26:29.263 "params": { 00:26:29.263 "timeout_sec": 30 00:26:29.263 } 00:26:29.263 }, 
00:26:29.263 { 00:26:29.263 "method": "bdev_nvme_set_options", 00:26:29.263 "params": { 00:26:29.263 "action_on_timeout": "none", 00:26:29.263 "timeout_us": 0, 00:26:29.263 "timeout_admin_us": 0, 00:26:29.263 "keep_alive_timeout_ms": 10000, 00:26:29.263 "arbitration_burst": 0, 00:26:29.263 "low_priority_weight": 0, 00:26:29.263 "medium_priority_weight": 0, 00:26:29.263 "high_priority_weight": 0, 00:26:29.263 "nvme_adminq_poll_period_us": 10000, 00:26:29.263 "nvme_ioq_poll_period_us": 0, 00:26:29.263 "io_queue_requests": 512, 00:26:29.263 "delay_cmd_submit": true, 00:26:29.263 "transport_retry_count": 4, 00:26:29.263 "bdev_retry_count": 3, 00:26:29.263 "transport_ack_timeout": 0, 00:26:29.263 "ctrlr_loss_timeout_sec": 0, 00:26:29.263 "reconnect_delay_sec": 0, 00:26:29.263 "fast_io_fail_timeout_sec": 0, 00:26:29.263 "disable_auto_failback": false, 00:26:29.263 "generate_uuids": false, 00:26:29.263 "transport_tos": 0, 00:26:29.263 "nvme_error_stat": false, 00:26:29.263 "rdma_srq_size": 0, 00:26:29.263 "io_path_stat": false, 00:26:29.263 "allow_accel_sequence": false, 00:26:29.263 "rdma_max_cq_size": 0, 00:26:29.263 "rdma_cm_event_timeout_ms": 0, 00:26:29.263 "dhchap_digests": [ 00:26:29.263 "sha256", 00:26:29.263 "sha384", 00:26:29.263 "sha512" 00:26:29.263 ], 00:26:29.263 "dhchap_dhgroups": [ 00:26:29.263 "null", 00:26:29.263 "ffdhe2048", 00:26:29.263 "ffdhe3072", 00:26:29.263 "ffdhe4096", 00:26:29.263 "ffdhe6144", 00:26:29.263 "ffdhe8192" 00:26:29.263 ] 00:26:29.263 } 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "method": "bdev_nvme_attach_controller", 00:26:29.263 "params": { 00:26:29.263 "name": "nvme0", 00:26:29.263 "trtype": "TCP", 00:26:29.263 "adrfam": "IPv4", 00:26:29.263 "traddr": "10.0.0.2", 00:26:29.263 "trsvcid": "4420", 00:26:29.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.263 "prchk_reftag": false, 00:26:29.263 "prchk_guard": false, 00:26:29.263 "ctrlr_loss_timeout_sec": 0, 00:26:29.263 "reconnect_delay_sec": 0, 00:26:29.263 
"fast_io_fail_timeout_sec": 0, 00:26:29.263 "psk": "key0", 00:26:29.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.263 "hdgst": false, 00:26:29.263 "ddgst": false, 00:26:29.263 "multipath": "multipath" 00:26:29.263 } 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "method": "bdev_nvme_set_hotplug", 00:26:29.263 "params": { 00:26:29.263 "period_us": 100000, 00:26:29.263 "enable": false 00:26:29.263 } 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "method": "bdev_enable_histogram", 00:26:29.263 "params": { 00:26:29.263 "name": "nvme0n1", 00:26:29.263 "enable": true 00:26:29.263 } 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "method": "bdev_wait_for_examine" 00:26:29.263 } 00:26:29.263 ] 00:26:29.263 }, 00:26:29.263 { 00:26:29.263 "subsystem": "nbd", 00:26:29.263 "config": [] 00:26:29.263 } 00:26:29.263 ] 00:26:29.263 }' 00:26:29.263 [2024-11-05 16:50:36.138209] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:29.263 [2024-11-05 16:50:36.138263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202302 ] 00:26:29.263 [2024-11-05 16:50:36.220919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.263 [2024-11-05 16:50:36.250228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.524 [2024-11-05 16:50:36.385061] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:30.094 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:30.094 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:26:30.094 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:26:30.094 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:26:30.094 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.094 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:30.354 Running I/O for 1 seconds... 00:26:31.339 4950.00 IOPS, 19.34 MiB/s 00:26:31.339 Latency(us) 00:26:31.339 [2024-11-05T15:50:38.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.339 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:31.339 Verification LBA range: start 0x0 length 0x2000 00:26:31.339 nvme0n1 : 1.06 4813.67 18.80 0.00 0.00 25957.71 6553.60 51554.99 00:26:31.339 [2024-11-05T15:50:38.402Z] =================================================================================================================== 00:26:31.339 [2024-11-05T15:50:38.402Z] Total : 4813.67 18.80 0.00 0.00 25957.71 6553.60 51554.99 00:26:31.339 { 00:26:31.339 "results": [ 00:26:31.339 { 00:26:31.339 "job": "nvme0n1", 00:26:31.339 "core_mask": "0x2", 00:26:31.339 "workload": "verify", 00:26:31.339 "status": "finished", 00:26:31.339 "verify_range": { 00:26:31.339 "start": 0, 00:26:31.339 "length": 8192 00:26:31.339 }, 00:26:31.339 "queue_depth": 128, 00:26:31.339 "io_size": 4096, 00:26:31.339 "runtime": 1.05512, 00:26:31.339 "iops": 4813.670482978239, 00:26:31.339 "mibps": 18.803400324133747, 00:26:31.339 "io_failed": 0, 00:26:31.339 "io_timeout": 0, 00:26:31.339 "avg_latency_us": 25957.70711819912, 00:26:31.339 "min_latency_us": 6553.6, 00:26:31.339 "max_latency_us": 51554.986666666664 00:26:31.339 } 00:26:31.339 ], 00:26:31.339 "core_count": 1 00:26:31.339 } 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
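The bdevperf summary above reports both IOPS and MiB/s for 4096-byte IOs over the TLS-enabled connection; the two figures are related only by the IO size. A quick awk cross-check, with `iops` and `io_size` copied from the results JSON above:

```shell
# Cross-check of the bdevperf summary: MiB/s = IOPS * io_size / 2^20,
# using the iops and io_size values from the results JSON above.
awk 'BEGIN {
    iops = 4813.670482978239; io_size = 4096
    printf "%.2f\n", iops * io_size / (1024 * 1024)
}'
```

This prints 18.80, matching the MiB/s column in the table.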
00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:31.339 nvmf_trace.0 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3202302 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3202302 ']' 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3202302 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:31.339 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# ps --no-headers -o comm= 3202302 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3202302' 00:26:31.629 killing process with pid 3202302 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3202302 00:26:31.629 Received shutdown signal, test time was about 1.000000 seconds 00:26:31.629 00:26:31.629 Latency(us) 00:26:31.629 [2024-11-05T15:50:38.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.629 [2024-11-05T15:50:38.692Z] =================================================================================================================== 00:26:31.629 [2024-11-05T15:50:38.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3202302 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:31.629 rmmod nvme_tcp 00:26:31.629 rmmod nvme_fabrics 00:26:31.629 rmmod nvme_keyring 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 
-- # modprobe -v -r nvme-fabrics 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 3201953 ']' 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 3201953 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3201953 ']' 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3201953 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3201953 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3201953' 00:26:31.629 killing process with pid 3201953 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3201953 00:26:31.629 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3201953 00:26:31.917 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:31.917 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:26:31.917 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@254 -- # local dev 00:26:31.917 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@257 -- # remove_target_ns 00:26:31.917 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:31.917 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:31.917 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # return 0 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@274 -- # iptr 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-save 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-restore 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2peQU8JQgC /tmp/tmp.OO02kbRYCJ /tmp/tmp.kLnevLjGsF 00:26:33.831 00:26:33.831 real 1m21.202s 00:26:33.831 user 2m5.422s 00:26:33.831 sys 0m26.747s 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:33.831 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:33.831 ************************************ 00:26:33.831 END TEST nvmf_tls 00:26:33.831 ************************************ 00:26:34.094 16:50:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:34.094 16:50:40 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:34.094 16:50:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:34.094 16:50:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:34.094 ************************************ 00:26:34.094 START TEST nvmf_fips 00:26:34.094 ************************************ 00:26:34.094 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:34.094 * Looking for test storage... 00:26:34.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 
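The `cmp_versions` trace that begins here splits each version string into numeric fields by setting IFS to `.`, `-`, and `:` for a single `read -ra`, then compares the fields pairwise. That split step in isolation (bash):

```shell
# The field split cmp_versions performs: an IFS of '.', '-' and ':' breaks
# a version string into an array for pairwise numeric comparison.
IFS=.-: read -ra ver1 <<< "1.15"
echo "${#ver1[@]} fields: ${ver1[*]}"
```

For "1.15" this yields two fields, 1 and 15; the IFS assignment applies only to the `read`, so later expansions are unaffected.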
00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.094 16:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.094 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:34.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.094 --rc genhtml_branch_coverage=1 00:26:34.095 --rc genhtml_function_coverage=1 00:26:34.095 --rc genhtml_legend=1 00:26:34.095 --rc geninfo_all_blocks=1 00:26:34.095 --rc geninfo_unexecuted_blocks=1 00:26:34.095 00:26:34.095 ' 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.095 --rc genhtml_branch_coverage=1 00:26:34.095 --rc genhtml_function_coverage=1 00:26:34.095 --rc genhtml_legend=1 00:26:34.095 --rc geninfo_all_blocks=1 00:26:34.095 --rc geninfo_unexecuted_blocks=1 00:26:34.095 00:26:34.095 ' 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.095 --rc genhtml_branch_coverage=1 00:26:34.095 --rc genhtml_function_coverage=1 00:26:34.095 --rc genhtml_legend=1 00:26:34.095 --rc geninfo_all_blocks=1 00:26:34.095 --rc geninfo_unexecuted_blocks=1 00:26:34.095 00:26:34.095 ' 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.095 --rc genhtml_branch_coverage=1 00:26:34.095 --rc genhtml_function_coverage=1 00:26:34.095 --rc genhtml_legend=1 00:26:34.095 --rc geninfo_all_blocks=1 00:26:34.095 --rc geninfo_unexecuted_blocks=1 00:26:34.095 00:26:34.095 ' 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
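The lcov (`lt 1.15 2`) and, later, OpenSSL (`ge 3.1.1 3.0.0`) checks in this trace both go through the `cmp_versions` helper in scripts/common.sh. A rough sketch of that idea, under the simplifying assumption of dot-only separators (the real helper also splits on `-` and `:` via `IFS=.-:`), with `version_lt` as a hypothetical stand-in name:

```shell
# Sketch of the cmp_versions idea: split on dots, compare numerically
# field by field; a missing field counts as zero.
version_lt() {
  local -a v1 v2
  IFS=. read -ra v1 <<< "$1"
  IFS=. read -ra v2 <<< "$2"
  local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < len; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}
version_lt 1.15 2 && echo lt        # lcov 1.15 < 2
version_lt 3.1.1 3.0.0 || echo ge   # openssl 3.1.1 >= 3.0.0
```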
00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:26:34.095 16:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:34.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:26:34.095 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.357 16:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:26:34.357 Error setting digest 00:26:34.357 4002B852AD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:26:34.357 4002B852AD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:26:34.357 16:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:34.357 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:34.358 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:34.358 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:34.358 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # xtrace_disable 00:26:34.358 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # pci_devs=() 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # net_devs=() 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # e810=() 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # local -ga e810 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@137 -- # x722=() 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # local -ga x722 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # mlx=() 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # local -ga mlx 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 
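The `e810+=(${pci_bus_cache[...]})` lines above bucket NICs into the e810/x722/mlx arrays by vendor:device ID before the transport checks. A hedged sketch of that pattern with a hand-populated stand-in for `pci_bus_cache` (the real cache is built elsewhere by the SPDK scripts):

```shell
# Stand-in cache: vendor:device ID -> space-separated PCI addresses.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"
  ["0x8086:0x1592"]=""
)
e810=()
# Unquoted on purpose: word splitting turns each address into an element,
# and an empty cache entry contributes nothing (same as the trace above).
e810+=(${pci_bus_cache["0x8086:0x1592"]})
e810+=(${pci_bus_cache["0x8086:0x159b"]})
echo "${#e810[@]}"   # 2
```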
00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:40.947 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:40.947 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:40.947 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 
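The "Found net devices under ..." lines come from globbing each PCI device's `net/` directory in sysfs and stripping the path prefix (nvmf/common.sh@227 and @243 in the trace). A self-contained sketch using a temporary mock of the sysfs layout instead of real hardware:

```shell
# Build a mock sysfs tree so the glob has something to match.
mock=$(mktemp -d)
mkdir -p "$mock/0000:4b:00.0/net/cvl_0_0"

# Same two steps as the trace: glob the net/ directory, then keep only
# the basename of each match via the ##*/ prefix-strip expansion.
pci_net_devs=("$mock/0000:4b:00.0/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"   # cvl_0_0

rm -rf "$mock"
```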
00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:40.947 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # is_hw=yes 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@247 -- # create_target_ns 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:40.947 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:40.948 16:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:40.948 10.0.0.1 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:40.948 
16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:40.948 10.0.0.2 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:40.948 16:50:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:40.948 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:40.948 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:40.948 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:40.948 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:40.948 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:40.948 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ 
tcp == tcp ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:41.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.624 ms 00:26:41.210 00:26:41.210 --- 10.0.0.1 ping statistics --- 00:26:41.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.210 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
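The `val_to_ip` calls traced above (nvmf/setup.sh@11-13) turn the integer `ip_pool` value into a dotted-quad address by printing its four bytes with `printf '%u.%u.%u.%u'`. A minimal Python sketch of that same conversion, using the two values from this log (the function name here is illustrative, not from the SPDK scripts):

```python
def val_to_ip(val: int) -> str:
    """Split a 32-bit integer into four octets, most significant first,
    mirroring the shell's printf '%u.%u.%u.%u' in nvmf/setup.sh."""
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# The two addresses assigned in the log come from ip_pool=0x0a000001:
print(val_to_ip(167772161))  # 10.0.0.1 (initiator side, cvl_0_0)
print(val_to_ip(167772162))  # 10.0.0.2 (target side, cvl_0_1, inside nvmf_ns_spdk)
```

This is why consecutive interface pairs consume two addresses each: the setup loop advances `ip_pool += 2` per pair, as seen at nvmf/setup.sh@33 above.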
00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:41.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:26:41.210 00:26:41.210 --- 10.0.0.2 ping statistics --- 00:26:41.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.210 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # return 0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:41.210 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:41.211 
16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # return 1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev= 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # return 0 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # return 1 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev= 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # return 0 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:41.211 ' 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:41.211 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=3206921 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 3206921 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip 
netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3206921 ']' 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:41.473 16:50:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:41.473 [2024-11-05 16:50:48.371039] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:41.473 [2024-11-05 16:50:48.371111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.473 [2024-11-05 16:50:48.473412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.473 [2024-11-05 16:50:48.524329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.473 [2024-11-05 16:50:48.524383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:41.473 [2024-11-05 16:50:48.524392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.473 [2024-11-05 16:50:48.524400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.473 [2024-11-05 16:50:48.524406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.473 [2024-11-05 16:50:48.525180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.dXW 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.dXW 00:26:42.418 16:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.dXW 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.dXW 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:42.418 [2024-11-05 16:50:49.391950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.418 [2024-11-05 16:50:49.407947] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:42.418 [2024-11-05 16:50:49.408274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.418 malloc0 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3207069 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3207069 /var/tmp/bdevperf.sock 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3207069 ']' 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:42.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
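The key handling traced above (fips.sh@137-140) writes the NVMe/TCP TLS PSK interchange key to a `mktemp` file with `echo -n` and restricts it to mode 0600 before the path is handed to `keyring_file_add_key`. A small Python sketch of the equivalent file handling, assuming only stdlib; the key string is copied from the log:

```python
import os
import tempfile

key = "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:"

# mktemp -t spdk-psk.XXX equivalent: create a private temp file for the PSK
fd, key_path = tempfile.mkstemp(prefix="spdk-psk.")
with os.fdopen(fd, "w") as f:
    f.write(key)  # echo -n semantics: no trailing newline

os.chmod(key_path, 0o600)  # chmod 0600, as in fips.sh@140
print(oct(os.stat(key_path).st_mode & 0o777))  # 0o600
```

The 0600 permissions matter because SPDK's file-based keyring refuses keys readable by other users; the log's subsequent `rpc.py ... keyring_file_add_key key0 /tmp/spdk-psk.dXW` consumes this path.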
00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:42.418 16:50:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:42.679 [2024-11-05 16:50:49.556215] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:26:42.680 [2024-11-05 16:50:49.556294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207069 ] 00:26:42.680 [2024-11-05 16:50:49.620354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.680 [2024-11-05 16:50:49.657143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.621 16:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:43.621 16:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:26:43.621 16:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.dXW 00:26:43.621 16:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:43.621 [2024-11-05 16:50:50.652475] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:43.881 TLSTESTn1 00:26:43.881 16:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:43.881 Running I/O 
for 10 seconds... 00:26:46.209 5752.00 IOPS, 22.47 MiB/s [2024-11-05T15:50:54.214Z] 5752.00 IOPS, 22.47 MiB/s [2024-11-05T15:50:55.157Z] 5667.00 IOPS, 22.14 MiB/s [2024-11-05T15:50:56.100Z] 5690.50 IOPS, 22.23 MiB/s [2024-11-05T15:50:57.042Z] 5686.60 IOPS, 22.21 MiB/s [2024-11-05T15:50:57.983Z] 5729.83 IOPS, 22.38 MiB/s [2024-11-05T15:50:58.925Z] 5576.00 IOPS, 21.78 MiB/s [2024-11-05T15:50:59.868Z] 5624.38 IOPS, 21.97 MiB/s [2024-11-05T15:51:01.254Z] 5662.33 IOPS, 22.12 MiB/s [2024-11-05T15:51:01.254Z] 5657.90 IOPS, 22.10 MiB/s 00:26:54.191 Latency(us) 00:26:54.191 [2024-11-05T15:51:01.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.191 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:54.191 Verification LBA range: start 0x0 length 0x2000 00:26:54.191 TLSTESTn1 : 10.05 5642.22 22.04 0.00 0.00 22618.81 4778.67 49588.91 00:26:54.191 [2024-11-05T15:51:01.254Z] =================================================================================================================== 00:26:54.191 [2024-11-05T15:51:01.254Z] Total : 5642.22 22.04 0.00 0.00 22618.81 4778.67 49588.91 00:26:54.191 { 00:26:54.191 "results": [ 00:26:54.191 { 00:26:54.191 "job": "TLSTESTn1", 00:26:54.191 "core_mask": "0x4", 00:26:54.191 "workload": "verify", 00:26:54.191 "status": "finished", 00:26:54.191 "verify_range": { 00:26:54.191 "start": 0, 00:26:54.191 "length": 8192 00:26:54.191 }, 00:26:54.191 "queue_depth": 128, 00:26:54.191 "io_size": 4096, 00:26:54.191 "runtime": 10.050296, 00:26:54.191 "iops": 5642.221880828187, 00:26:54.191 "mibps": 22.039929221985105, 00:26:54.191 "io_failed": 0, 00:26:54.191 "io_timeout": 0, 00:26:54.191 "avg_latency_us": 22618.813711894094, 00:26:54.191 "min_latency_us": 4778.666666666667, 00:26:54.191 "max_latency_us": 49588.90666666667 00:26:54.191 } 00:26:54.191 ], 00:26:54.191 "core_count": 1 00:26:54.191 } 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # 
cleanup 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:26:54.191 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:54.191 nvmf_trace.0 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3207069 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3207069 ']' 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3207069 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3207069 00:26:54.191 16:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3207069' 00:26:54.191 killing process with pid 3207069 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3207069 00:26:54.191 Received shutdown signal, test time was about 10.000000 seconds 00:26:54.191 00:26:54.191 Latency(us) 00:26:54.191 [2024-11-05T15:51:01.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.191 [2024-11-05T15:51:01.254Z] =================================================================================================================== 00:26:54.191 [2024-11-05T15:51:01.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3207069 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:54.191 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:54.191 rmmod nvme_tcp 00:26:54.191 rmmod nvme_fabrics 00:26:54.191 rmmod nvme_keyring 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 3206921 ']' 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 3206921 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3206921 ']' 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3206921 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3206921 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3206921' 00:26:54.452 killing process with pid 3206921 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3206921 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3206921 00:26:54.452 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:54.453 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:26:54.453 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@254 -- # local dev 00:26:54.453 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 
-- # remove_target_ns 00:26:54.453 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:54.453 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:54.453 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # return 0 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@274 -- # iptr 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-save 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-restore 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.dXW 00:26:56.999 00:26:56.999 real 0m22.595s 00:26:56.999 user 0m24.605s 00:26:56.999 sys 0m9.226s 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:56.999 ************************************ 00:26:56.999 END TEST nvmf_fips 00:26:56.999 ************************************ 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:56.999 ************************************ 00:26:56.999 START TEST nvmf_control_msg_list 00:26:56.999 ************************************ 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:56.999 * Looking for test storage... 00:26:56.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.999 16:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.999 --rc genhtml_branch_coverage=1 00:26:56.999 --rc genhtml_function_coverage=1 00:26:56.999 --rc 
genhtml_legend=1 00:26:56.999 --rc geninfo_all_blocks=1 00:26:56.999 --rc geninfo_unexecuted_blocks=1 00:26:56.999 00:26:56.999 ' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.999 --rc genhtml_branch_coverage=1 00:26:56.999 --rc genhtml_function_coverage=1 00:26:56.999 --rc genhtml_legend=1 00:26:56.999 --rc geninfo_all_blocks=1 00:26:56.999 --rc geninfo_unexecuted_blocks=1 00:26:56.999 00:26:56.999 ' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.999 --rc genhtml_branch_coverage=1 00:26:56.999 --rc genhtml_function_coverage=1 00:26:56.999 --rc genhtml_legend=1 00:26:56.999 --rc geninfo_all_blocks=1 00:26:56.999 --rc geninfo_unexecuted_blocks=1 00:26:56.999 00:26:56.999 ' 00:26:56.999 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.999 --rc genhtml_branch_coverage=1 00:26:56.999 --rc genhtml_function_coverage=1 00:26:56.999 --rc genhtml_legend=1 00:26:56.999 --rc geninfo_all_blocks=1 00:26:56.999 --rc geninfo_unexecuted_blocks=1 00:26:56.999 00:26:56.999 ' 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.000 16:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.000 
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:57.000 16:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:57.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 
00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # xtrace_disable 00:26:57.000 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # pci_devs=() 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # net_devs=() 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:05.162 16:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # e810=() 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # local -ga e810 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # x722=() 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # local -ga x722 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # mlx=() 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # local -ga mlx 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:05.162 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:05.162 16:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:05.162 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 
00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:05.162 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:05.162 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:05.162 16:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # is_hw=yes 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@247 -- # create_target_ns 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:05.162 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:05.162 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.162 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:05.162 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:05.162 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:05.162 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:05.162 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:05.162 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip 
link set lo up 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@55 -- # 
initiator=cvl_0_0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:05.163 16:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:05.163 10.0.0.1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:05.163 10.0.0.2 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:05.163 16:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:05.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.611 ms 00:27:05.163 00:27:05.163 --- 10.0.0.1 ping statistics --- 00:27:05.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.163 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:27:05.163 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:05.164 16:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:05.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:27:05.164 00:27:05.164 --- 10.0.0.2 ping statistics --- 00:27:05.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.164 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # return 0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:05.164 16:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # return 1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev= 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # return 0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:05.164 16:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:05.164 16:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # return 1 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev= 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # return 0 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:05.164 ' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=3214329 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 3214329 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3214329 ']' 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:05.164 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.165 [2024-11-05 16:51:11.540934] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:27:05.165 [2024-11-05 16:51:11.540983] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.165 [2024-11-05 16:51:11.620051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.165 [2024-11-05 16:51:11.655727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.165 [2024-11-05 16:51:11.655767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.165 [2024-11-05 16:51:11.655775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.165 [2024-11-05 16:51:11.655782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.165 [2024-11-05 16:51:11.655787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:05.165 [2024-11-05 16:51:11.656353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.165 [2024-11-05 16:51:11.779392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.165 Malloc0 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:05.165 [2024-11-05 16:51:11.830321] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3214361 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3214362 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3214363 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3214361 00:27:05.165 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:05.165 [2024-11-05 16:51:11.901028] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:27:05.165 [2024-11-05 16:51:11.901311] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:05.165 [2024-11-05 16:51:11.901573] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:06.105 Initializing NVMe Controllers 00:27:06.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:06.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:27:06.105 Initialization complete. Launching workers. 00:27:06.105 ======================================================== 00:27:06.105 Latency(us) 00:27:06.105 Device Information : IOPS MiB/s Average min max 00:27:06.105 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40898.11 40787.20 40953.32 00:27:06.105 ======================================================== 00:27:06.105 Total : 25.00 0.10 40898.11 40787.20 40953.32 00:27:06.105 00:27:06.105 Initializing NVMe Controllers 00:27:06.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:06.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:27:06.105 Initialization complete. Launching workers. 
00:27:06.105 ======================================================== 00:27:06.105 Latency(us) 00:27:06.105 Device Information : IOPS MiB/s Average min max 00:27:06.105 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40919.49 40891.19 41288.72 00:27:06.105 ======================================================== 00:27:06.106 Total : 25.00 0.10 40919.49 40891.19 41288.72 00:27:06.106 00:27:06.106 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3214362 00:27:06.106 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3214363 00:27:06.106 Initializing NVMe Controllers 00:27:06.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:06.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:27:06.106 Initialization complete. Launching workers. 00:27:06.106 ======================================================== 00:27:06.106 Latency(us) 00:27:06.106 Device Information : IOPS MiB/s Average min max 00:27:06.106 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40899.88 40813.90 40997.31 00:27:06.106 ======================================================== 00:27:06.106 Total : 25.00 0.10 40899.88 40813.90 40997.31 00:27:06.106 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:06.106 16:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:06.106 rmmod nvme_tcp 00:27:06.106 rmmod nvme_fabrics 00:27:06.106 rmmod nvme_keyring 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 3214329 ']' 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 3214329 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3214329 ']' 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3214329 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3214329 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 3214329' 00:27:06.106 killing process with pid 3214329 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3214329 00:27:06.106 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3214329 00:27:06.366 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:06.366 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:27:06.366 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@254 -- # local dev 00:27:06.366 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:06.366 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:06.366 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:06.366 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # return 0 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:08.274 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@274 -- # iptr 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@548 -- # iptables-restore 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-save 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:08.535 00:27:08.535 real 0m11.730s 00:27:08.535 user 0m7.171s 00:27:08.535 sys 0m6.347s 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.535 ************************************ 00:27:08.535 END TEST nvmf_control_msg_list 00:27:08.535 ************************************ 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:08.535 ************************************ 00:27:08.535 START TEST nvmf_wait_for_buf 00:27:08.535 ************************************ 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:27:08.535 * Looking for test storage... 
00:27:08.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:27:08.535 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:08.796 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:27:08.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.797 --rc genhtml_branch_coverage=1 00:27:08.797 --rc genhtml_function_coverage=1 00:27:08.797 --rc genhtml_legend=1 00:27:08.797 --rc geninfo_all_blocks=1 00:27:08.797 --rc geninfo_unexecuted_blocks=1 00:27:08.797 00:27:08.797 ' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:08.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.797 --rc genhtml_branch_coverage=1 00:27:08.797 --rc genhtml_function_coverage=1 00:27:08.797 --rc genhtml_legend=1 00:27:08.797 --rc geninfo_all_blocks=1 00:27:08.797 --rc geninfo_unexecuted_blocks=1 00:27:08.797 00:27:08.797 ' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:08.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.797 --rc genhtml_branch_coverage=1 00:27:08.797 --rc genhtml_function_coverage=1 00:27:08.797 --rc genhtml_legend=1 00:27:08.797 --rc geninfo_all_blocks=1 00:27:08.797 --rc geninfo_unexecuted_blocks=1 00:27:08.797 00:27:08.797 ' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:08.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.797 --rc genhtml_branch_coverage=1 00:27:08.797 --rc genhtml_function_coverage=1 00:27:08.797 --rc genhtml_legend=1 00:27:08.797 --rc geninfo_all_blocks=1 00:27:08.797 --rc geninfo_unexecuted_blocks=1 00:27:08.797 00:27:08.797 ' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@50 -- # : 0 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:08.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:08.797 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:08.798 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # xtrace_disable 00:27:08.798 16:51:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # pci_devs=() 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # net_devs=() 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # e810=() 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # local -ga e810 00:27:16.934 
16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # x722=() 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # local -ga x722 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # mlx=() 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # local -ga mlx 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:16.934 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:16.934 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:16.934 16:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:16.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:16.934 16:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:16.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # is_hw=yes 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:16.934 16:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@247 -- # create_target_ns 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:16.934 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 
00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 
00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:16.935 10.0.0.1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:16.935 10.0.0.2 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:16.935 16:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:16.935 16:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:16.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.687 ms 00:27:16.935 00:27:16.935 --- 10.0.0.1 ping statistics --- 00:27:16.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.935 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:16.935 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:16.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:27:16.936 00:27:16.936 --- 10.0.0.2 ping statistics --- 00:27:16.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.936 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # return 0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # return 1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev= 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # return 0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # return 1 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev= 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # return 0 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:16.936 ' 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:16.936 16:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=3218718 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 3218718 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3218718 ']' 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.936 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:16.936 [2024-11-05 16:51:23.053849] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:27:16.936 [2024-11-05 16:51:23.053940] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.936 [2024-11-05 16:51:23.137477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.936 [2024-11-05 16:51:23.178285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.936 [2024-11-05 16:51:23.178324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.936 [2024-11-05 16:51:23.178333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.936 [2024-11-05 16:51:23.178339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.937 [2024-11-05 16:51:23.178345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:16.937 [2024-11-05 16:51:23.178956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 Malloc0 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 [2024-11-05 16:51:23.977720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.937 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:17.197 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.197 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:17.197 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.197 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:17.197 [2024-11-05 16:51:24.013960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.197 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.197 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:17.197 [2024-11-05 16:51:24.116823] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:18.578 Initializing NVMe Controllers 00:27:18.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:18.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:27:18.578 Initialization complete. Launching workers. 00:27:18.578 ======================================================== 00:27:18.578 Latency(us) 00:27:18.578 Device Information : IOPS MiB/s Average min max 00:27:18.578 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165842.33 47874.64 191551.49 00:27:18.578 ======================================================== 00:27:18.578 Total : 25.00 3.12 165842.33 47874.64 191551.49 00:27:18.578 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:18.578 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:18.578 rmmod nvme_tcp 00:27:18.578 rmmod nvme_fabrics 00:27:18.578 rmmod nvme_keyring 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 3218718 ']' 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 3218718 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3218718 ']' 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3218718 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3218718 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3218718' 00:27:18.839 killing process with pid 3218718 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3218718 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3218718 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@254 -- # local dev 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:18.839 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # return 0 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:27:21.382 
16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@274 -- # iptr 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-restore 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-save 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:21.382 00:27:21.382 real 0m12.492s 00:27:21.382 user 0m4.987s 00:27:21.382 sys 0m6.048s 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.382 ************************************ 00:27:21.382 END TEST nvmf_wait_for_buf 00:27:21.382 ************************************ 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@125 -- # xtrace_disable 00:27:21.382 16:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # pci_devs=() 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:27.965 
16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # net_devs=() 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # e810=() 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # local -ga e810 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # x722=() 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # local -ga x722 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # mlx=() 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # local -ga mlx 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.965 16:51:34 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:27.965 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:27.965 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:27.965 16:51:34 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:27.965 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:27.965 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:27.966 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:27.966 ************************************ 00:27:27.966 START TEST nvmf_perf_adq 00:27:27.966 ************************************ 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:27.966 * Looking for test storage... 00:27:27.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 
00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:27:27.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.966 --rc genhtml_branch_coverage=1 00:27:27.966 --rc genhtml_function_coverage=1 00:27:27.966 --rc genhtml_legend=1 00:27:27.966 --rc geninfo_all_blocks=1 00:27:27.966 --rc geninfo_unexecuted_blocks=1 00:27:27.966 00:27:27.966 ' 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:27.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.966 --rc genhtml_branch_coverage=1 00:27:27.966 --rc genhtml_function_coverage=1 00:27:27.966 --rc genhtml_legend=1 00:27:27.966 --rc geninfo_all_blocks=1 00:27:27.966 --rc geninfo_unexecuted_blocks=1 00:27:27.966 00:27:27.966 ' 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:27.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.966 --rc genhtml_branch_coverage=1 00:27:27.966 --rc genhtml_function_coverage=1 00:27:27.966 --rc genhtml_legend=1 00:27:27.966 --rc geninfo_all_blocks=1 00:27:27.966 --rc geninfo_unexecuted_blocks=1 00:27:27.966 00:27:27.966 ' 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:27.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.966 --rc genhtml_branch_coverage=1 00:27:27.966 --rc genhtml_function_coverage=1 00:27:27.966 --rc genhtml_legend=1 00:27:27.966 --rc geninfo_all_blocks=1 00:27:27.966 --rc geninfo_unexecuted_blocks=1 00:27:27.966 00:27:27.966 ' 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.966 
16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:27.966 16:51:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.966 16:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.966 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # : 0 
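The `paths/export.sh` trace above shows the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories prepended to PATH over and over, because the export script is sourced once per nesting level. A minimal, hypothetical dedup pass (not part of SPDK's scripts) that keeps the first occurrence of each entry could look like this:

```shell
#!/usr/bin/env bash
# Collapse duplicate PATH entries, preserving first-seen order.
# example_path is a shortened, hypothetical stand-in for the bloated PATH above.
example_path="/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/bin"

dedup_path() {
    local out= seen=: entry
    local IFS=:          # split the argument on colons
    for entry in $1; do
        # $seen accumulates ":entry:" markers so the substring test is exact.
        if [[ $seen != *":$entry:"* ]]; then
            out=${out:+$out:}$entry
            seen+="$entry:"
        fi
    done
    printf '%s\n' "$out"
}

dedup_path "$example_path"
```

Running this over the example value prints each directory once, in the order it first appeared; applying it to a real PATH would just be `PATH=$(dedup_path "$PATH")`.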
00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:27.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:27:27.967 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # 
local -a pci_net_devs 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
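The `[: : integer expression expected` message captured above comes from `common.sh` line 31 running `[ '' -eq 1 ]`: the test builtin requires both operands of `-eq` to be integers, and an empty string is not one. A hedged sketch of the usual guard (defaulting the value before the numeric test), using a hypothetical variable name:

```shell
#!/usr/bin/env bash
# Reproduce the failure mode from the log, then the guarded form.
maybe_empty=""   # hypothetical stand-in for the empty variable in common.sh

# Unguarded: '[' sees an empty string where an integer is required, prints
# "integer expression expected" to stderr, and returns a non-zero status.
if [ "$maybe_empty" -eq 1 ] 2>/dev/null; then
    echo "unguarded: taken"
else
    echo "unguarded: not taken"
fi

# Guarded: ${var:-0} substitutes 0 when the variable is empty or unset,
# so the comparison is always between two integers.
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "guarded: taken"
else
    echo "guarded: not taken"
fi
```

In the log the script tolerates the error because the `if` simply falls through on the non-zero status, but the stderr noise ends up in the test output as seen above.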
00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:36.099 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:36.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.100 16:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:36.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 
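In the `[[ 0x159b == \0\x\1\0\1\7 ]]` lines above, the backslashes are how xtrace renders a quoted right-hand side: in `[[ string == pattern ]]` an unquoted RHS is treated as a glob pattern, so the script quotes the expected device ID to force an exact, literal comparison. A small sketch of the difference, with one made-up pattern:

```shell
#!/usr/bin/env bash
# Unquoted RHS of [[ == ]] is a glob; quoted RHS is matched literally.
dev=0x159b   # device ID taken from the log above

if [[ $dev == 0x15* ]]; then       # glob: matches any ID starting 0x15
    echo "glob: match"
fi

if [[ $dev == "0x1017" ]]; then    # literal: exact string comparison
    echo "literal: match"
else
    echo "literal: no match"
fi
```

This is why the traced comparisons against `0x1017`/`0x1019` (Mellanox IDs) all evaluate false for the Intel `0x159b` devices and the script falls through to the `ice` driver path.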
00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:36.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:36.100 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:36.100 16:51:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:36.361 16:51:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:38.301 16:51:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:27:43.685 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.685 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- 
# [[ e810 == e810 ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:43.686 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:43.686 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:43.686 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:43.686 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.686 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:43.686 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@247 -- # create_target_ns 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:43.686 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:27:43.686 
16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:43.686 10.0.0.1 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:27:43.686 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:43.687 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:43.687 10.0.0.2 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@69 -- # [[ 
phy == veth ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:43.687 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:43.687 PING 10.0.0.1 (10.0.0.1) 
56(84) bytes of data. 00:27:43.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.612 ms 00:27:43.687 00:27:43.687 --- 10.0.0.1 ping statistics --- 00:27:43.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.687 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:43.687 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:43.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:27:43.687 00:27:43.687 --- 10.0.0.2 ping statistics --- 00:27:43.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.687 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:43.687 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:43.687 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator1 
00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 
-- # local dev=target0 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.688 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:43.688 ' 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:43.688 16:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=3229002 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 3229002 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3229002 ']' 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.688 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:43.950 [2024-11-05 16:51:50.781603] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:27:43.950 [2024-11-05 16:51:50.781673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.950 [2024-11-05 16:51:50.865139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.950 [2024-11-05 16:51:50.909708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.950 [2024-11-05 16:51:50.909744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.950 [2024-11-05 16:51:50.909763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.950 [2024-11-05 16:51:50.909770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.950 [2024-11-05 16:51:50.909776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:43.950 [2024-11-05 16:51:50.911297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.950 [2024-11-05 16:51:50.911413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.950 [2024-11-05 16:51:50.911569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.950 [2024-11-05 16:51:50.911571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.522 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:44.522 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:27:44.522 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:44.522 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:44.522 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:44.783 16:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.783 [2024-11-05 16:51:51.741339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.783 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.784 Malloc1 00:27:44.784 16:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.784 [2024-11-05 16:51:51.808115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3229352 00:27:44.784 16:51:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:44.784 16:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:47.328 "tick_rate": 2400000000, 00:27:47.328 "poll_groups": [ 00:27:47.328 { 00:27:47.328 "name": "nvmf_tgt_poll_group_000", 00:27:47.328 "admin_qpairs": 1, 00:27:47.328 "io_qpairs": 1, 00:27:47.328 "current_admin_qpairs": 1, 00:27:47.328 "current_io_qpairs": 1, 00:27:47.328 "pending_bdev_io": 0, 00:27:47.328 "completed_nvme_io": 18721, 00:27:47.328 "transports": [ 00:27:47.328 { 00:27:47.328 "trtype": "TCP" 00:27:47.328 } 00:27:47.328 ] 00:27:47.328 }, 00:27:47.328 { 00:27:47.328 "name": "nvmf_tgt_poll_group_001", 00:27:47.328 "admin_qpairs": 0, 00:27:47.328 "io_qpairs": 1, 00:27:47.328 "current_admin_qpairs": 0, 00:27:47.328 "current_io_qpairs": 1, 00:27:47.328 "pending_bdev_io": 0, 00:27:47.328 "completed_nvme_io": 27738, 00:27:47.328 "transports": [ 00:27:47.328 { 00:27:47.328 "trtype": "TCP" 00:27:47.328 } 00:27:47.328 ] 00:27:47.328 }, 00:27:47.328 { 00:27:47.328 "name": "nvmf_tgt_poll_group_002", 00:27:47.328 "admin_qpairs": 0, 00:27:47.328 "io_qpairs": 1, 00:27:47.328 "current_admin_qpairs": 0, 00:27:47.328 "current_io_qpairs": 1, 00:27:47.328 "pending_bdev_io": 0, 00:27:47.328 "completed_nvme_io": 21462, 00:27:47.328 
"transports": [ 00:27:47.328 { 00:27:47.328 "trtype": "TCP" 00:27:47.328 } 00:27:47.328 ] 00:27:47.328 }, 00:27:47.328 { 00:27:47.328 "name": "nvmf_tgt_poll_group_003", 00:27:47.328 "admin_qpairs": 0, 00:27:47.328 "io_qpairs": 1, 00:27:47.328 "current_admin_qpairs": 0, 00:27:47.328 "current_io_qpairs": 1, 00:27:47.328 "pending_bdev_io": 0, 00:27:47.328 "completed_nvme_io": 20020, 00:27:47.328 "transports": [ 00:27:47.328 { 00:27:47.328 "trtype": "TCP" 00:27:47.328 } 00:27:47.328 ] 00:27:47.328 } 00:27:47.328 ] 00:27:47.328 }' 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:47.328 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3229352 00:27:55.460 Initializing NVMe Controllers 00:27:55.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:55.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:55.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:55.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:55.460 Initialization complete. Launching workers. 
00:27:55.460 ======================================================== 00:27:55.460 Latency(us) 00:27:55.460 Device Information : IOPS MiB/s Average min max 00:27:55.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11113.20 43.41 5759.14 1576.97 8985.91 00:27:55.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14677.10 57.33 4359.80 1321.12 8957.40 00:27:55.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12562.40 49.07 5093.95 1297.16 50091.57 00:27:55.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14752.00 57.62 4351.59 1257.50 43905.21 00:27:55.460 ======================================================== 00:27:55.460 Total : 53104.70 207.44 4824.03 1257.50 50091.57 00:27:55.460 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:55.460 rmmod nvme_tcp 00:27:55.460 rmmod nvme_fabrics 00:27:55.460 rmmod nvme_keyring 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:27:55.460 16:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 3229002 ']' 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 3229002 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3229002 ']' 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3229002 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3229002 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3229002' 00:27:55.460 killing process with pid 3229002 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3229002 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3229002 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@254 -- # local dev 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:55.460 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # return 0 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:57.368 16:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@274 -- # iptr 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-save 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-restore 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:57.368 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:59.278 16:52:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:01.192 16:52:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap 
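The adq_reload_driver step above cycles the ice driver (`modprobe -a sch_mqprio`, `rmmod ice`, `modprobe ice`, then a 5-second settle). A condensed sketch of that sequence follows; it needs root in a real run, so the `DRY_RUN` switch added here (an assumption, not part of the original script) makes it only print the commands:

```shell
# Dry-run sketch of the adq_reload_driver sequence from the trace.
# With DRY_RUN=1 each privileged command is echoed instead of executed.
adq_reload_driver() {
    local run=${DRY_RUN:+echo}
    $run modprobe -a sch_mqprio    # qdisc module needed for mqprio
    $run rmmod ice                 # unload the E810 driver
    $run modprobe ice              # reload it
    ${DRY_RUN:+:} sleep 5          # skip the settle delay in dry-run mode
}

DRY_RUN=1 adq_reload_driver
```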
nvmftestfini SIGINT SIGTERM EXIT 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:06.484 16:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:06.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:06.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.484 16:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:06.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.484 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:06.485 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:06.485 
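The "Found net devices under ..." lines come from globbing each PCI address's `net/` directory in sysfs and stripping the path prefix. A sketch of that lookup, with a `sysroot` parameter added here purely for testability (the real script globs `/sys/bus/pci/devices` directly):

```shell
# Sketch of the sysfs glob used above to map a PCI address to its net devices.
pci_net_devs() {
    local sysroot=$1 pci=$2 path
    for path in "$sysroot/devices/$pci/net/"*; do
        [ -e "$path" ] && basename "$path"   # e.g. cvl_0_0
    done
}

# Exercise against a mock sysfs tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/devices/0000:4b:00.0/net/cvl_0_0"
pci_net_devs "$tmp" 0000:4b:00.0   # prints cvl_0_0
```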
16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@247 -- # create_target_ns 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:06.485 16:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # add_to_ns 
cvl_0_1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:06.485 10.0.0.1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:06.485 10.0.0.2 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:06.485 16:52:12 
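The val_to_ip helper traced above turns the integer IP-pool value (167772161, i.e. 0x0A000001) into dotted-quad form via printf. A reconstruction consistent with the trace output:

```shell
# Reconstruction of val_to_ip as shown in the trace: split a 32-bit
# integer into four octets, most significant first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) $((val & 0xff))
}

val_to_ip 167772161   # prints 10.0.0.1
val_to_ip 167772162   # prints 10.0.0.2
```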
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:06.485 16:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- 
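At this point the trace has created the target namespace, moved cvl_0_1 into it, assigned 10.0.0.1/10.0.0.2, brought both links up, and opened TCP port 4420 in iptables. The whole sequence can be condensed into one sketch; every command needs root, so the `DRY_RUN` switch (an addition for illustration) just prints each command:

```shell
# Condensed dry-run sketch of the namespace plumbing performed above.
setup_target_ns() {
    local ns=$1 idev=$2 tdev=$3 run=${DRY_RUN:+echo}
    $run ip netns add "$ns"
    $run ip link set "$tdev" netns "$ns"                 # move target port into the ns
    $run ip addr add 10.0.0.1/24 dev "$idev"             # initiator side
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tdev"
    $run ip link set "$idev" up
    $run ip netns exec "$ns" ip link set "$tdev" up
    # open the NVMe/TCP listener port on the initiator-facing device
    $run iptables -I INPUT 1 -i "$idev" -p tcp --dport 4420 -j ACCEPT
}

DRY_RUN=1 setup_target_ns nvmf_ns_spdk cvl_0_0 cvl_0_1
```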
# ping_ips 1 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:06.485 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 
00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:06.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:06.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.491 ms 00:28:06.486 00:28:06.486 --- 10.0.0.1 ping statistics --- 00:28:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.486 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:28:06.486 16:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:06.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:06.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:28:06.486 00:28:06.486 --- 10.0.0.2 ping statistics --- 00:28:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.486 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:06.486 16:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:28:06.486 16:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:06.486 16:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:06.486 ' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:06.486 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:06.487 net.core.busy_poll = 1 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:06.487 net.core.busy_read = 1 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 
2@0 2@2 hw 1 mode channel 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 ingress 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=3233844 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 3233844 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3233844 ']' 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:06.487 16:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:06.487 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:06.487 [2024-11-05 16:52:13.497760] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:28:06.487 [2024-11-05 16:52:13.497825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.748 [2024-11-05 16:52:13.580561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.748 [2024-11-05 16:52:13.623046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.748 [2024-11-05 16:52:13.623083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.748 [2024-11-05 16:52:13.623091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.748 [2024-11-05 16:52:13.623098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.748 [2024-11-05 16:52:13.623104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:06.748 [2024-11-05 16:52:13.624674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.748 [2024-11-05 16:52:13.624807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.748 [2024-11-05 16:52:13.625140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.748 [2024-11-05 16:52:13.625141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:07.327 16:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.327 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.589 [2024-11-05 16:52:14.458768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.589 Malloc1 00:28:07.589 16:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.589 [2024-11-05 16:52:14.530040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3234196 00:28:07.589 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:07.589 16:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:09.506 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:09.506 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.506 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.506 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.506 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:09.506 "tick_rate": 2400000000, 00:28:09.506 "poll_groups": [ 00:28:09.506 { 00:28:09.506 "name": "nvmf_tgt_poll_group_000", 00:28:09.506 "admin_qpairs": 1, 00:28:09.506 "io_qpairs": 1, 00:28:09.506 "current_admin_qpairs": 1, 00:28:09.506 "current_io_qpairs": 1, 00:28:09.506 "pending_bdev_io": 0, 00:28:09.506 "completed_nvme_io": 26619, 00:28:09.506 "transports": [ 00:28:09.506 { 00:28:09.506 "trtype": "TCP" 00:28:09.506 } 00:28:09.506 ] 00:28:09.506 }, 00:28:09.506 { 00:28:09.506 "name": "nvmf_tgt_poll_group_001", 00:28:09.506 "admin_qpairs": 0, 00:28:09.506 "io_qpairs": 3, 00:28:09.506 "current_admin_qpairs": 0, 00:28:09.506 "current_io_qpairs": 3, 00:28:09.506 "pending_bdev_io": 0, 00:28:09.506 "completed_nvme_io": 40720, 00:28:09.506 "transports": [ 00:28:09.506 { 00:28:09.506 "trtype": "TCP" 00:28:09.506 } 00:28:09.506 ] 00:28:09.506 }, 00:28:09.506 { 00:28:09.506 "name": "nvmf_tgt_poll_group_002", 00:28:09.506 "admin_qpairs": 0, 00:28:09.506 "io_qpairs": 0, 00:28:09.506 "current_admin_qpairs": 0, 00:28:09.506 "current_io_qpairs": 0, 00:28:09.506 "pending_bdev_io": 0, 00:28:09.506 "completed_nvme_io": 0, 00:28:09.506 "transports": 
[ 00:28:09.506 { 00:28:09.506 "trtype": "TCP" 00:28:09.506 } 00:28:09.506 ] 00:28:09.506 }, 00:28:09.506 { 00:28:09.506 "name": "nvmf_tgt_poll_group_003", 00:28:09.506 "admin_qpairs": 0, 00:28:09.506 "io_qpairs": 0, 00:28:09.506 "current_admin_qpairs": 0, 00:28:09.506 "current_io_qpairs": 0, 00:28:09.506 "pending_bdev_io": 0, 00:28:09.506 "completed_nvme_io": 0, 00:28:09.506 "transports": [ 00:28:09.506 { 00:28:09.506 "trtype": "TCP" 00:28:09.506 } 00:28:09.506 ] 00:28:09.506 } 00:28:09.506 ] 00:28:09.506 }' 00:28:09.506 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:09.506 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:09.766 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:09.767 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:09.767 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3234196 00:28:17.907 Initializing NVMe Controllers 00:28:17.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:17.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:17.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:17.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:17.907 Initialization complete. Launching workers. 
00:28:17.907 ======================================================== 00:28:17.907 Latency(us) 00:28:17.907 Device Information : IOPS MiB/s Average min max 00:28:17.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7318.13 28.59 8762.88 1155.46 54950.96 00:28:17.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6423.85 25.09 9995.13 1181.40 54824.53 00:28:17.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7513.03 29.35 8518.96 1125.50 55524.08 00:28:17.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 18738.67 73.20 3414.89 1155.58 45318.77 00:28:17.907 ======================================================== 00:28:17.907 Total : 39993.69 156.23 6409.23 1125.50 55524.08 00:28:17.907 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:17.908 rmmod nvme_tcp 00:28:17.908 rmmod nvme_fabrics 00:28:17.908 rmmod nvme_keyring 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:28:17.908 16:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 3233844 ']' 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 3233844 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3233844 ']' 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3233844 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3233844 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3233844' 00:28:17.908 killing process with pid 3233844 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3233844 00:28:17.908 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3233844 00:28:18.169 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:18.169 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:28:18.169 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@254 -- # local dev 00:28:18.169 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:18.169 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:28:18.169 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:18.169 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # return 0 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:21.474 16:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@274 -- # iptr 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-save 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-restore 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:21.474 00:28:21.474 real 0m53.315s 00:28:21.474 user 2m49.772s 00:28:21.474 sys 0m11.290s 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.474 ************************************ 00:28:21.474 END TEST nvmf_perf_adq 00:28:21.474 ************************************ 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:21.474 ************************************ 00:28:21.474 START TEST nvmf_shutdown 00:28:21.474 ************************************ 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:21.474 * Looking for test storage... 00:28:21.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.474 16:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:21.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.474 --rc genhtml_branch_coverage=1 00:28:21.474 --rc genhtml_function_coverage=1 00:28:21.474 --rc genhtml_legend=1 00:28:21.474 --rc geninfo_all_blocks=1 00:28:21.474 --rc geninfo_unexecuted_blocks=1 00:28:21.474 00:28:21.474 ' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:21.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.474 --rc genhtml_branch_coverage=1 00:28:21.474 --rc genhtml_function_coverage=1 00:28:21.474 --rc genhtml_legend=1 00:28:21.474 --rc geninfo_all_blocks=1 00:28:21.474 --rc geninfo_unexecuted_blocks=1 00:28:21.474 00:28:21.474 ' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:21.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.474 --rc genhtml_branch_coverage=1 00:28:21.474 --rc genhtml_function_coverage=1 00:28:21.474 --rc genhtml_legend=1 00:28:21.474 --rc geninfo_all_blocks=1 00:28:21.474 --rc geninfo_unexecuted_blocks=1 00:28:21.474 00:28:21.474 ' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:21.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.474 --rc genhtml_branch_coverage=1 00:28:21.474 --rc genhtml_function_coverage=1 00:28:21.474 --rc genhtml_legend=1 
00:28:21.474 --rc geninfo_all_blocks=1 00:28:21.474 --rc geninfo_unexecuted_blocks=1 00:28:21.474 00:28:21.474 ' 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.474 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:21.475 16:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # : 0 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:21.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:21.475 ************************************ 00:28:21.475 START TEST nvmf_shutdown_tc1 00:28:21.475 ************************************ 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:21.475 16:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:28:21.475 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:29.628 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.628 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:29.628 Found 0000:4b:00.0 (0x8086 - 0x159b) 
00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:29.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:29.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.628 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:29.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@247 -- # create_target_ns 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@135 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:29.628 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # 
(( _dev = _dev, max = _dev )) 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:29.629 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:29.629 10.0.0.1 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:29.629 10.0.0.2 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:29.629 
16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:29.629 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:29.629 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:29.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:29.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.666 ms 00:28:29.630 00:28:29.630 --- 10.0.0.1 ping statistics --- 00:28:29.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.630 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:29.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:29.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:28:29.630 00:28:29.630 --- 10.0.0.2 ping statistics --- 00:28:29.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.630 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:29.630 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:29.630 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # return 1 00:28:29.630 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev= 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@160 -- # return 0 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:29.631 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target1 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # return 1 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev= 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@160 -- # return 0 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:29.631 ' 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:29.631 16:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=3240682 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 3240682 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3240682 ']' 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:28:29.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:29.631 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.631 [2024-11-05 16:52:35.940728] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:28:29.631 [2024-11-05 16:52:35.940803] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.631 [2024-11-05 16:52:36.041766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.631 [2024-11-05 16:52:36.093731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.631 [2024-11-05 16:52:36.093807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.631 [2024-11-05 16:52:36.093816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.631 [2024-11-05 16:52:36.093823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.631 [2024-11-05 16:52:36.093829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:29.631 [2024-11-05 16:52:36.095782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.631 [2024-11-05 16:52:36.095981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.631 [2024-11-05 16:52:36.096145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.631 [2024-11-05 16:52:36.096145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.893 [2024-11-05 16:52:36.795507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.893 16:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.893 16:52:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.893 Malloc1 00:28:29.893 [2024-11-05 16:52:36.913879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.893 Malloc2 00:28:30.155 Malloc3 00:28:30.155 Malloc4 00:28:30.155 Malloc5 00:28:30.155 Malloc6 00:28:30.155 Malloc7 00:28:30.155 Malloc8 00:28:30.155 Malloc9 
00:28:30.417 Malloc10 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3241068 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3241068 /var/tmp/bdevperf.sock 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3241068 ']' 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:30.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.417 { 00:28:30.417 "params": { 00:28:30.417 "name": "Nvme$subsystem", 00:28:30.417 "trtype": "$TEST_TRANSPORT", 00:28:30.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.417 "adrfam": "ipv4", 00:28:30.417 "trsvcid": "$NVMF_PORT", 00:28:30.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.417 "hdgst": ${hdgst:-false}, 00:28:30.417 "ddgst": ${ddgst:-false} 00:28:30.417 }, 00:28:30.417 "method": "bdev_nvme_attach_controller" 00:28:30.417 } 00:28:30.417 EOF 00:28:30.417 )") 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.417 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.417 16:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.417 { 00:28:30.417 "params": { 00:28:30.417 "name": "Nvme$subsystem", 00:28:30.417 "trtype": "$TEST_TRANSPORT", 00:28:30.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.417 "adrfam": "ipv4", 00:28:30.417 "trsvcid": "$NVMF_PORT", 00:28:30.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.417 "hdgst": ${hdgst:-false}, 00:28:30.417 "ddgst": ${ddgst:-false} 00:28:30.417 }, 00:28:30.417 "method": "bdev_nvme_attach_controller" 00:28:30.417 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.418 { 00:28:30.418 "params": { 00:28:30.418 "name": "Nvme$subsystem", 00:28:30.418 "trtype": "$TEST_TRANSPORT", 00:28:30.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "$NVMF_PORT", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.418 "hdgst": ${hdgst:-false}, 00:28:30.418 "ddgst": ${ddgst:-false} 00:28:30.418 }, 00:28:30.418 "method": "bdev_nvme_attach_controller" 00:28:30.418 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.418 { 
00:28:30.418 "params": { 00:28:30.418 "name": "Nvme$subsystem", 00:28:30.418 "trtype": "$TEST_TRANSPORT", 00:28:30.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "$NVMF_PORT", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.418 "hdgst": ${hdgst:-false}, 00:28:30.418 "ddgst": ${ddgst:-false} 00:28:30.418 }, 00:28:30.418 "method": "bdev_nvme_attach_controller" 00:28:30.418 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.418 { 00:28:30.418 "params": { 00:28:30.418 "name": "Nvme$subsystem", 00:28:30.418 "trtype": "$TEST_TRANSPORT", 00:28:30.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "$NVMF_PORT", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.418 "hdgst": ${hdgst:-false}, 00:28:30.418 "ddgst": ${ddgst:-false} 00:28:30.418 }, 00:28:30.418 "method": "bdev_nvme_attach_controller" 00:28:30.418 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.418 { 00:28:30.418 "params": { 00:28:30.418 "name": "Nvme$subsystem", 00:28:30.418 "trtype": "$TEST_TRANSPORT", 00:28:30.418 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "$NVMF_PORT", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.418 "hdgst": ${hdgst:-false}, 00:28:30.418 "ddgst": ${ddgst:-false} 00:28:30.418 }, 00:28:30.418 "method": "bdev_nvme_attach_controller" 00:28:30.418 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.418 { 00:28:30.418 "params": { 00:28:30.418 "name": "Nvme$subsystem", 00:28:30.418 "trtype": "$TEST_TRANSPORT", 00:28:30.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "$NVMF_PORT", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.418 "hdgst": ${hdgst:-false}, 00:28:30.418 "ddgst": ${ddgst:-false} 00:28:30.418 }, 00:28:30.418 "method": "bdev_nvme_attach_controller" 00:28:30.418 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 [2024-11-05 16:52:37.370812] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:28:30.418 [2024-11-05 16:52:37.370889] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.418 { 00:28:30.418 "params": { 00:28:30.418 "name": "Nvme$subsystem", 00:28:30.418 "trtype": "$TEST_TRANSPORT", 00:28:30.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "$NVMF_PORT", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.418 "hdgst": ${hdgst:-false}, 00:28:30.418 "ddgst": ${ddgst:-false} 00:28:30.418 }, 00:28:30.418 "method": "bdev_nvme_attach_controller" 00:28:30.418 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.418 { 00:28:30.418 "params": { 00:28:30.418 "name": "Nvme$subsystem", 00:28:30.418 "trtype": "$TEST_TRANSPORT", 00:28:30.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "$NVMF_PORT", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.418 "hdgst": ${hdgst:-false}, 
00:28:30.418 "ddgst": ${ddgst:-false} 00:28:30.418 }, 00:28:30.418 "method": "bdev_nvme_attach_controller" 00:28:30.418 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:30.418 { 00:28:30.418 "params": { 00:28:30.418 "name": "Nvme$subsystem", 00:28:30.418 "trtype": "$TEST_TRANSPORT", 00:28:30.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "$NVMF_PORT", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.418 "hdgst": ${hdgst:-false}, 00:28:30.418 "ddgst": ${ddgst:-false} 00:28:30.418 }, 00:28:30.418 "method": "bdev_nvme_attach_controller" 00:28:30.418 } 00:28:30.418 EOF 00:28:30.418 )") 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:28:30.418 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:30.418 "params": { 00:28:30.418 "name": "Nvme1", 00:28:30.418 "trtype": "tcp", 00:28:30.418 "traddr": "10.0.0.2", 00:28:30.418 "adrfam": "ipv4", 00:28:30.418 "trsvcid": "4420", 00:28:30.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:30.418 "hdgst": false, 00:28:30.418 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 00:28:30.419 "name": "Nvme2", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 00:28:30.419 "name": "Nvme3", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 00:28:30.419 "name": "Nvme4", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 
00:28:30.419 "name": "Nvme5", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 00:28:30.419 "name": "Nvme6", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 00:28:30.419 "name": "Nvme7", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 00:28:30.419 "name": "Nvme8", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 00:28:30.419 "name": "Nvme9", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 },{ 00:28:30.419 "params": { 00:28:30.419 "name": "Nvme10", 00:28:30.419 "trtype": "tcp", 00:28:30.419 "traddr": "10.0.0.2", 00:28:30.419 "adrfam": "ipv4", 00:28:30.419 "trsvcid": "4420", 00:28:30.419 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:30.419 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:30.419 "hdgst": false, 00:28:30.419 "ddgst": false 00:28:30.419 }, 00:28:30.419 "method": "bdev_nvme_attach_controller" 00:28:30.419 }' 00:28:30.419 [2024-11-05 16:52:37.448505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.680 [2024-11-05 16:52:37.485019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3241068 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:32.068 16:52:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:33.013 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3241068 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3240682 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.013 { 00:28:33.013 "params": { 00:28:33.013 "name": "Nvme$subsystem", 00:28:33.013 "trtype": "$TEST_TRANSPORT", 00:28:33.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.013 "adrfam": "ipv4", 00:28:33.013 "trsvcid": "$NVMF_PORT", 00:28:33.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.013 "hdgst": ${hdgst:-false}, 00:28:33.013 "ddgst": ${ddgst:-false} 00:28:33.013 }, 00:28:33.013 "method": "bdev_nvme_attach_controller" 00:28:33.013 } 00:28:33.013 EOF 00:28:33.013 )") 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.013 16:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.013 { 00:28:33.013 "params": { 00:28:33.013 "name": "Nvme$subsystem", 00:28:33.013 "trtype": "$TEST_TRANSPORT", 00:28:33.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.013 "adrfam": "ipv4", 00:28:33.013 "trsvcid": "$NVMF_PORT", 00:28:33.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.013 "hdgst": ${hdgst:-false}, 00:28:33.013 "ddgst": ${ddgst:-false} 00:28:33.013 }, 00:28:33.013 "method": "bdev_nvme_attach_controller" 00:28:33.013 } 00:28:33.013 EOF 00:28:33.013 )") 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.013 { 00:28:33.013 "params": { 00:28:33.013 "name": "Nvme$subsystem", 00:28:33.013 "trtype": "$TEST_TRANSPORT", 00:28:33.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.013 "adrfam": "ipv4", 00:28:33.013 "trsvcid": "$NVMF_PORT", 00:28:33.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.013 "hdgst": ${hdgst:-false}, 00:28:33.013 "ddgst": ${ddgst:-false} 00:28:33.013 }, 00:28:33.013 "method": "bdev_nvme_attach_controller" 00:28:33.013 } 00:28:33.013 EOF 00:28:33.013 )") 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.013 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.013 
16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.013 { 00:28:33.013 "params": { 00:28:33.013 "name": "Nvme$subsystem", 00:28:33.013 "trtype": "$TEST_TRANSPORT", 00:28:33.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.013 "adrfam": "ipv4", 00:28:33.013 "trsvcid": "$NVMF_PORT", 00:28:33.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.014 "hdgst": ${hdgst:-false}, 00:28:33.014 "ddgst": ${ddgst:-false} 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 } 00:28:33.014 EOF 00:28:33.014 )") 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.014 { 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme$subsystem", 00:28:33.014 "trtype": "$TEST_TRANSPORT", 00:28:33.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "$NVMF_PORT", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.014 "hdgst": ${hdgst:-false}, 00:28:33.014 "ddgst": ${ddgst:-false} 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 } 00:28:33.014 EOF 00:28:33.014 )") 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 
00:28:33.014 { 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme$subsystem", 00:28:33.014 "trtype": "$TEST_TRANSPORT", 00:28:33.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "$NVMF_PORT", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.014 "hdgst": ${hdgst:-false}, 00:28:33.014 "ddgst": ${ddgst:-false} 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 } 00:28:33.014 EOF 00:28:33.014 )") 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.014 { 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme$subsystem", 00:28:33.014 "trtype": "$TEST_TRANSPORT", 00:28:33.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "$NVMF_PORT", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.014 "hdgst": ${hdgst:-false}, 00:28:33.014 "ddgst": ${ddgst:-false} 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 } 00:28:33.014 EOF 00:28:33.014 )") 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.014 { 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme$subsystem", 00:28:33.014 "trtype": "$TEST_TRANSPORT", 
00:28:33.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "$NVMF_PORT", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.014 "hdgst": ${hdgst:-false}, 00:28:33.014 "ddgst": ${ddgst:-false} 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 } 00:28:33.014 EOF 00:28:33.014 )") 00:28:33.014 [2024-11-05 16:52:39.824018] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:28:33.014 [2024-11-05 16:52:39.824088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241436 ] 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.014 { 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme$subsystem", 00:28:33.014 "trtype": "$TEST_TRANSPORT", 00:28:33.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "$NVMF_PORT", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.014 "hdgst": ${hdgst:-false}, 00:28:33.014 "ddgst": ${ddgst:-false} 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 } 00:28:33.014 EOF 00:28:33.014 )") 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:33.014 { 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme$subsystem", 00:28:33.014 "trtype": "$TEST_TRANSPORT", 00:28:33.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "$NVMF_PORT", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.014 "hdgst": ${hdgst:-false}, 00:28:33.014 "ddgst": ${ddgst:-false} 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 } 00:28:33.014 EOF 00:28:33.014 )") 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:28:33.014 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme1", 00:28:33.014 "trtype": "tcp", 00:28:33.014 "traddr": "10.0.0.2", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "4420", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.014 "hdgst": false, 00:28:33.014 "ddgst": false 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 },{ 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme2", 00:28:33.014 "trtype": "tcp", 00:28:33.014 "traddr": "10.0.0.2", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "4420", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:33.014 "hdgst": false, 00:28:33.014 "ddgst": false 00:28:33.014 }, 
00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 },{ 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme3", 00:28:33.014 "trtype": "tcp", 00:28:33.014 "traddr": "10.0.0.2", 00:28:33.014 "adrfam": "ipv4", 00:28:33.014 "trsvcid": "4420", 00:28:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:33.014 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:33.014 "hdgst": false, 00:28:33.014 "ddgst": false 00:28:33.014 }, 00:28:33.014 "method": "bdev_nvme_attach_controller" 00:28:33.014 },{ 00:28:33.014 "params": { 00:28:33.014 "name": "Nvme4", 00:28:33.014 "trtype": "tcp", 00:28:33.014 "traddr": "10.0.0.2", 00:28:33.015 "adrfam": "ipv4", 00:28:33.015 "trsvcid": "4420", 00:28:33.015 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:33.015 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:33.015 "hdgst": false, 00:28:33.015 "ddgst": false 00:28:33.015 }, 00:28:33.015 "method": "bdev_nvme_attach_controller" 00:28:33.015 },{ 00:28:33.015 "params": { 00:28:33.015 "name": "Nvme5", 00:28:33.015 "trtype": "tcp", 00:28:33.015 "traddr": "10.0.0.2", 00:28:33.015 "adrfam": "ipv4", 00:28:33.015 "trsvcid": "4420", 00:28:33.015 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:33.015 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:33.015 "hdgst": false, 00:28:33.015 "ddgst": false 00:28:33.015 }, 00:28:33.015 "method": "bdev_nvme_attach_controller" 00:28:33.015 },{ 00:28:33.015 "params": { 00:28:33.015 "name": "Nvme6", 00:28:33.015 "trtype": "tcp", 00:28:33.015 "traddr": "10.0.0.2", 00:28:33.015 "adrfam": "ipv4", 00:28:33.015 "trsvcid": "4420", 00:28:33.015 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:33.015 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:33.015 "hdgst": false, 00:28:33.015 "ddgst": false 00:28:33.015 }, 00:28:33.015 "method": "bdev_nvme_attach_controller" 00:28:33.015 },{ 00:28:33.015 "params": { 00:28:33.015 "name": "Nvme7", 00:28:33.015 "trtype": "tcp", 00:28:33.015 "traddr": "10.0.0.2", 00:28:33.015 "adrfam": "ipv4", 00:28:33.015 "trsvcid": "4420", 00:28:33.015 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:33.015 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:33.015 "hdgst": false, 00:28:33.015 "ddgst": false 00:28:33.015 }, 00:28:33.015 "method": "bdev_nvme_attach_controller" 00:28:33.015 },{ 00:28:33.015 "params": { 00:28:33.015 "name": "Nvme8", 00:28:33.015 "trtype": "tcp", 00:28:33.015 "traddr": "10.0.0.2", 00:28:33.015 "adrfam": "ipv4", 00:28:33.015 "trsvcid": "4420", 00:28:33.015 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:33.015 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:33.015 "hdgst": false, 00:28:33.015 "ddgst": false 00:28:33.015 }, 00:28:33.015 "method": "bdev_nvme_attach_controller" 00:28:33.015 },{ 00:28:33.015 "params": { 00:28:33.015 "name": "Nvme9", 00:28:33.015 "trtype": "tcp", 00:28:33.015 "traddr": "10.0.0.2", 00:28:33.015 "adrfam": "ipv4", 00:28:33.015 "trsvcid": "4420", 00:28:33.015 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:33.015 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:33.015 "hdgst": false, 00:28:33.015 "ddgst": false 00:28:33.015 }, 00:28:33.015 "method": "bdev_nvme_attach_controller" 00:28:33.015 },{ 00:28:33.015 "params": { 00:28:33.015 "name": "Nvme10", 00:28:33.015 "trtype": "tcp", 00:28:33.015 "traddr": "10.0.0.2", 00:28:33.015 "adrfam": "ipv4", 00:28:33.015 "trsvcid": "4420", 00:28:33.015 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:33.015 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:33.015 "hdgst": false, 00:28:33.015 "ddgst": false 00:28:33.015 }, 00:28:33.015 "method": "bdev_nvme_attach_controller" 00:28:33.015 }' 00:28:33.015 [2024-11-05 16:52:39.898494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.015 [2024-11-05 16:52:39.934508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.401 Running I/O for 1 seconds... 
00:28:35.789 1869.00 IOPS, 116.81 MiB/s 00:28:35.789 Latency(us) 00:28:35.789 [2024-11-05T15:52:42.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.789 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme1n1 : 1.11 233.82 14.61 0.00 0.00 269147.32 6280.53 244667.73 00:28:35.789 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme2n1 : 1.15 221.81 13.86 0.00 0.00 280928.21 18459.31 248162.99 00:28:35.789 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme3n1 : 1.08 236.28 14.77 0.00 0.00 258676.91 19879.25 265639.25 00:28:35.789 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme4n1 : 1.18 270.95 16.93 0.00 0.00 221103.19 13653.33 244667.73 00:28:35.789 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme5n1 : 1.15 223.58 13.97 0.00 0.00 264359.68 19333.12 246415.36 00:28:35.789 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme6n1 : 1.15 222.61 13.91 0.00 0.00 261081.17 21954.56 244667.73 00:28:35.789 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme7n1 : 1.19 269.25 16.83 0.00 0.00 212524.20 12997.97 230686.72 00:28:35.789 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme8n1 : 1.19 267.86 16.74 0.00 0.00 210167.13 13598.72 249910.61 
00:28:35.789 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme9n1 : 1.18 219.44 13.71 0.00 0.00 250242.60 2757.97 283115.52 00:28:35.789 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.789 Verification LBA range: start 0x0 length 0x400 00:28:35.789 Nvme10n1 : 1.20 265.66 16.60 0.00 0.00 204587.69 11960.32 270882.13 00:28:35.789 [2024-11-05T15:52:42.852Z] =================================================================================================================== 00:28:35.789 [2024-11-05T15:52:42.852Z] Total : 2431.25 151.95 0.00 0.00 240501.22 2757.97 283115.52 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e 00:28:35.789 16:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:35.789 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:35.789 rmmod nvme_tcp 00:28:35.789 rmmod nvme_fabrics 00:28:35.789 rmmod nvme_keyring 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 3240682 ']' 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 3240682 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3240682 ']' 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3240682 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3240682 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:36.050 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:36.051 16:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3240682' 00:28:36.051 killing process with pid 3240682 00:28:36.051 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3240682 00:28:36.051 16:52:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3240682 00:28:36.312 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:36.312 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini 00:28:36.312 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@254 -- # local dev 00:28:36.312 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:36.312 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:36.312 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:36.312 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:38.227 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:38.227 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:38.227 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@121 -- # return 0 00:28:38.227 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:38.227 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@261 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:38.227 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:38.227 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:38.227 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 
00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@274 -- # iptr 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # iptables-save 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # iptables-restore 00:28:38.228 00:28:38.228 real 0m16.780s 00:28:38.228 user 0m34.566s 00:28:38.228 sys 0m6.666s 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:38.228 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:38.228 ************************************ 00:28:38.228 END TEST nvmf_shutdown_tc1 00:28:38.228 ************************************ 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:38.490 ************************************ 00:28:38.490 START TEST nvmf_shutdown_tc2 00:28:38.490 ************************************ 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:28:38.490 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.490 
16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # local -ga e810 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.490 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:38.491 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:38.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:38.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:38.491 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:38.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:38.491 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:38.491 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:38.491 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@247 -- # create_target_ns 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:38.491 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@52 -- # [[ phy 
== phy ]] 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:38.491 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:38.492 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:38.492 10.0.0.1 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 
00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:38.492 10.0.0.2 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:38.492 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval 'ip netns 
exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair < 
pairs )) 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:38.755 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:38.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.577 ms 00:28:38.755 00:28:38.755 --- 10.0.0.1 ping statistics --- 00:28:38.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.755 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:38.755 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/setup.sh@159 -- # get_net_dev target0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # ping -c 1 
10.0.0.2 00:28:38.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:28:38.756 00:28:38.756 --- 10.0.0.2 ping statistics --- 00:28:38.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.756 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 
-- # get_net_dev initiator0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:38.756 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # return 1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev= 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@160 -- # return 0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:38.756 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target1 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:38.756 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # return 1 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev= 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@160 -- # return 0 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:38.757 ' 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:38.757 16:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:38.757 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=3242833 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 3242833 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3242833 ']' 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:39.019 16:52:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.019 [2024-11-05 16:52:45.893590] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:28:39.019 [2024-11-05 16:52:45.893654] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.019 [2024-11-05 16:52:45.989781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.019 [2024-11-05 16:52:46.028821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.019 [2024-11-05 16:52:46.028864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.019 [2024-11-05 16:52:46.028869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.019 [2024-11-05 16:52:46.028874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.019 [2024-11-05 16:52:46.028879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
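The address resolution traced earlier in this log (get_ip_address → get_net_dev → reading the interface alias, optionally inside the nvmf_ns_spdk namespace) can be sketched roughly as below. This is a simplified, hedged reconstruction of the helpers in nvmf/setup.sh, not the actual script: the logical-name-to-netdev mapping table and the use of `lo` as a stand-in device are assumptions for the demo (the real run maps target0→cvl_0_1 and initiator0→cvl_0_0).

```shell
#!/usr/bin/env bash
# Sketch of the lookup chain seen in the trace above (assumed simplification).
# Logical device names (target0, initiator0) map to kernel netdevs, and the
# configured IP is read back from the netdev's ifalias file.
declare -A net_devs=([initiator0]=lo [target0]=lo)  # assumed demo mapping

get_net_dev() {
  local dev=$1
  # Mirrors setup.sh@100: fail if the logical name has no backing netdev.
  [[ -n $dev && -n ${net_devs[$dev]} ]] || return 1
  echo "${net_devs[$dev]}"
}

get_ip_address() {
  local dev=$1 in_ns=$2 ip
  dev=$(get_net_dev "$dev") || return 0   # no device -> empty result, rc 0
  # When a namespace command prefix is set, the read is wrapped in
  # "ip netns exec <ns>", exactly as the eval'd command in the trace.
  ip=$(eval "${in_ns:+ip netns exec $in_ns }cat /sys/class/net/$dev/ifalias")
  [[ -n $ip ]] && echo "$ip"
}
```

In the run above this is why NVMF_SECOND_INITIATOR_IP and NVMF_SECOND_TARGET_IP end up empty: get_net_dev returns 1 for initiator1/target1, so the caller echoes nothing.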
00:28:39.019 [2024-11-05 16:52:46.030587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.019 [2024-11-05 16:52:46.030765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.019 [2024-11-05 16:52:46.030905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.019 [2024-11-05 16:52:46.030907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.012 [2024-11-05 16:52:46.751243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.012 16:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.012 16:52:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.012 Malloc1 00:28:40.012 [2024-11-05 16:52:46.861498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.012 Malloc2 00:28:40.012 Malloc3 00:28:40.012 Malloc4 00:28:40.012 Malloc5 00:28:40.012 Malloc6 00:28:40.012 Malloc7 00:28:40.273 Malloc8 00:28:40.273 Malloc9 
00:28:40.273 Malloc10 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3243048 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3243048 /var/tmp/bdevperf.sock 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3243048 ']' 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:40.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.273 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.273 { 00:28:40.273 "params": { 00:28:40.273 "name": "Nvme$subsystem", 00:28:40.273 "trtype": "$TEST_TRANSPORT", 00:28:40.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.273 "adrfam": "ipv4", 00:28:40.273 "trsvcid": "$NVMF_PORT", 00:28:40.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.273 "hdgst": ${hdgst:-false}, 00:28:40.273 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 
00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.274 { 00:28:40.274 "params": { 00:28:40.274 "name": "Nvme$subsystem", 00:28:40.274 "trtype": "$TEST_TRANSPORT", 00:28:40.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.274 "adrfam": "ipv4", 00:28:40.274 "trsvcid": "$NVMF_PORT", 00:28:40.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.274 "hdgst": ${hdgst:-false}, 00:28:40.274 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.274 { 00:28:40.274 "params": { 00:28:40.274 "name": "Nvme$subsystem", 00:28:40.274 "trtype": "$TEST_TRANSPORT", 00:28:40.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.274 "adrfam": "ipv4", 00:28:40.274 "trsvcid": "$NVMF_PORT", 00:28:40.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.274 "hdgst": ${hdgst:-false}, 00:28:40.274 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat 
<<-EOF 00:28:40.274 { 00:28:40.274 "params": { 00:28:40.274 "name": "Nvme$subsystem", 00:28:40.274 "trtype": "$TEST_TRANSPORT", 00:28:40.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.274 "adrfam": "ipv4", 00:28:40.274 "trsvcid": "$NVMF_PORT", 00:28:40.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.274 "hdgst": ${hdgst:-false}, 00:28:40.274 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.274 { 00:28:40.274 "params": { 00:28:40.274 "name": "Nvme$subsystem", 00:28:40.274 "trtype": "$TEST_TRANSPORT", 00:28:40.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.274 "adrfam": "ipv4", 00:28:40.274 "trsvcid": "$NVMF_PORT", 00:28:40.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.274 "hdgst": ${hdgst:-false}, 00:28:40.274 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.274 { 00:28:40.274 "params": { 00:28:40.274 "name": "Nvme$subsystem", 00:28:40.274 "trtype": "$TEST_TRANSPORT", 
00:28:40.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.274 "adrfam": "ipv4", 00:28:40.274 "trsvcid": "$NVMF_PORT", 00:28:40.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.274 "hdgst": ${hdgst:-false}, 00:28:40.274 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.274 { 00:28:40.274 "params": { 00:28:40.274 "name": "Nvme$subsystem", 00:28:40.274 "trtype": "$TEST_TRANSPORT", 00:28:40.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.274 "adrfam": "ipv4", 00:28:40.274 "trsvcid": "$NVMF_PORT", 00:28:40.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.274 "hdgst": ${hdgst:-false}, 00:28:40.274 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.274 { 00:28:40.274 "params": { 00:28:40.274 "name": "Nvme$subsystem", 00:28:40.274 "trtype": "$TEST_TRANSPORT", 00:28:40.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.274 "adrfam": "ipv4", 00:28:40.274 "trsvcid": "$NVMF_PORT", 00:28:40.274 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.274 "hdgst": ${hdgst:-false}, 00:28:40.274 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 [2024-11-05 16:52:47.323056] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:28:40.274 [2024-11-05 16:52:47.323126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243048 ] 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.274 { 00:28:40.274 "params": { 00:28:40.274 "name": "Nvme$subsystem", 00:28:40.274 "trtype": "$TEST_TRANSPORT", 00:28:40.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.274 "adrfam": "ipv4", 00:28:40.274 "trsvcid": "$NVMF_PORT", 00:28:40.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.274 "hdgst": ${hdgst:-false}, 00:28:40.274 "ddgst": ${ddgst:-false} 00:28:40.274 }, 00:28:40.274 "method": "bdev_nvme_attach_controller" 00:28:40.274 } 00:28:40.274 EOF 00:28:40.274 )") 00:28:40.274 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.535 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:40.535 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:40.535 { 00:28:40.535 "params": { 00:28:40.535 "name": "Nvme$subsystem", 00:28:40.535 "trtype": "$TEST_TRANSPORT", 00:28:40.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.535 "adrfam": "ipv4", 00:28:40.535 "trsvcid": "$NVMF_PORT", 00:28:40.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.535 "hdgst": ${hdgst:-false}, 00:28:40.535 "ddgst": ${ddgst:-false} 00:28:40.535 }, 00:28:40.535 "method": "bdev_nvme_attach_controller" 00:28:40.535 } 00:28:40.535 EOF 00:28:40.535 )") 00:28:40.535 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:28:40.535 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 00:28:40.535 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:28:40.535 16:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:40.535 "params": { 00:28:40.535 "name": "Nvme1", 00:28:40.535 "trtype": "tcp", 00:28:40.535 "traddr": "10.0.0.2", 00:28:40.535 "adrfam": "ipv4", 00:28:40.535 "trsvcid": "4420", 00:28:40.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.535 "hdgst": false, 00:28:40.535 "ddgst": false 00:28:40.535 }, 00:28:40.535 "method": "bdev_nvme_attach_controller" 00:28:40.535 },{ 00:28:40.535 "params": { 00:28:40.535 "name": "Nvme2", 00:28:40.535 "trtype": "tcp", 00:28:40.535 "traddr": "10.0.0.2", 00:28:40.535 "adrfam": "ipv4", 00:28:40.535 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 },{ 00:28:40.536 "params": { 00:28:40.536 "name": "Nvme3", 00:28:40.536 
"trtype": "tcp", 00:28:40.536 "traddr": "10.0.0.2", 00:28:40.536 "adrfam": "ipv4", 00:28:40.536 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 },{ 00:28:40.536 "params": { 00:28:40.536 "name": "Nvme4", 00:28:40.536 "trtype": "tcp", 00:28:40.536 "traddr": "10.0.0.2", 00:28:40.536 "adrfam": "ipv4", 00:28:40.536 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 },{ 00:28:40.536 "params": { 00:28:40.536 "name": "Nvme5", 00:28:40.536 "trtype": "tcp", 00:28:40.536 "traddr": "10.0.0.2", 00:28:40.536 "adrfam": "ipv4", 00:28:40.536 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 },{ 00:28:40.536 "params": { 00:28:40.536 "name": "Nvme6", 00:28:40.536 "trtype": "tcp", 00:28:40.536 "traddr": "10.0.0.2", 00:28:40.536 "adrfam": "ipv4", 00:28:40.536 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 },{ 00:28:40.536 "params": { 00:28:40.536 "name": "Nvme7", 00:28:40.536 "trtype": "tcp", 00:28:40.536 "traddr": "10.0.0.2", 00:28:40.536 "adrfam": "ipv4", 00:28:40.536 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": 
false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 },{ 00:28:40.536 "params": { 00:28:40.536 "name": "Nvme8", 00:28:40.536 "trtype": "tcp", 00:28:40.536 "traddr": "10.0.0.2", 00:28:40.536 "adrfam": "ipv4", 00:28:40.536 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 },{ 00:28:40.536 "params": { 00:28:40.536 "name": "Nvme9", 00:28:40.536 "trtype": "tcp", 00:28:40.536 "traddr": "10.0.0.2", 00:28:40.536 "adrfam": "ipv4", 00:28:40.536 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 },{ 00:28:40.536 "params": { 00:28:40.536 "name": "Nvme10", 00:28:40.536 "trtype": "tcp", 00:28:40.536 "traddr": "10.0.0.2", 00:28:40.536 "adrfam": "ipv4", 00:28:40.536 "trsvcid": "4420", 00:28:40.536 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:40.536 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:40.536 "hdgst": false, 00:28:40.536 "ddgst": false 00:28:40.536 }, 00:28:40.536 "method": "bdev_nvme_attach_controller" 00:28:40.536 }' 00:28:40.536 [2024-11-05 16:52:47.396587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.536 [2024-11-05 16:52:47.433036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.922 Running I/O for 10 seconds... 
00:28:41.922 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:41.922 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:28:41.922 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:41.922 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.922 16:52:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:42.183 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:42.444 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=139 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 139 -ge 100 ']' 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3243048 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3243048 
']' 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3243048 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3243048 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3243048' 00:28:42.706 killing process with pid 3243048 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3243048 00:28:42.706 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3243048 00:28:42.967 Received shutdown signal, test time was about 0.979912 seconds 00:28:42.967 00:28:42.967 Latency(us) 00:28:42.967 [2024-11-05T15:52:50.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.967 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme1n1 : 0.97 264.59 16.54 0.00 0.00 238991.36 18459.31 248162.99 00:28:42.967 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme2n1 : 0.94 203.19 12.70 0.00 0.00 304368.07 35826.35 251658.24 
00:28:42.967 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme3n1 : 0.96 265.72 16.61 0.00 0.00 228269.87 18350.08 249910.61 00:28:42.967 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme4n1 : 0.97 263.99 16.50 0.00 0.00 224833.49 20097.71 244667.73 00:28:42.967 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme5n1 : 0.95 202.60 12.66 0.00 0.00 286472.82 18459.31 248162.99 00:28:42.967 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme6n1 : 0.97 263.00 16.44 0.00 0.00 215697.07 17585.49 267386.88 00:28:42.967 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme7n1 : 0.98 262.35 16.40 0.00 0.00 212013.01 17476.27 234181.97 00:28:42.967 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme8n1 : 0.98 257.40 16.09 0.00 0.00 210787.96 14854.83 249910.61 00:28:42.967 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme9n1 : 0.95 201.05 12.57 0.00 0.00 263219.20 14636.37 269134.51 00:28:42.967 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:42.967 Verification LBA range: start 0x0 length 0x400 00:28:42.967 Nvme10n1 : 0.96 200.25 12.52 0.00 0.00 258280.11 16711.68 258648.75 00:28:42.967 [2024-11-05T15:52:50.030Z] =================================================================================================================== 00:28:42.967 
[2024-11-05T15:52:50.030Z] Total : 2384.15 149.01 0.00 0.00 240590.40 14636.37 269134.51 00:28:42.967 16:52:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3242833 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:43.997 16:52:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:43.997 rmmod nvme_tcp 00:28:43.997 rmmod nvme_fabrics 00:28:43.997 rmmod nvme_keyring 00:28:43.997 16:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 3242833 ']' 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 3242833 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3242833 ']' 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3242833 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:43.997 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3242833 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3242833' 00:28:44.309 killing process with pid 3242833 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3242833 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@976 -- # wait 3242833 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@254 -- # local dev 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:44.309 16:52:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@121 -- # return 0 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@211 -- # 
local dev=cvl_0_0 in_ns= 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:46.859 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@274 -- # iptr 00:28:46.860 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # iptables-save 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # iptables-restore 00:28:46.860 00:28:46.860 real 0m8.051s 00:28:46.860 user 0m23.980s 00:28:46.860 sys 0m1.378s 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.860 ************************************ 00:28:46.860 END TEST nvmf_shutdown_tc2 00:28:46.860 ************************************ 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:46.860 ************************************ 00:28:46.860 START TEST nvmf_shutdown_tc3 00:28:46.860 ************************************ 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:46.860 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # remove_target_ns 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:46.860 
16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # net_devs=() 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.860 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:46.860 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:46.861 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:46.861 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:46.861 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:46.861 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:46.861 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.861 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:46.861 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@247 -- # 
create_target_ns 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 
00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:46.861 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:46.862 
16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # echo 
10.0.0.1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:46.862 10.0.0.1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:46.862 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:46.862 10.0.0.2 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:46.862 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:46.862 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:46.862 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.863 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:46.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.550 ms 00:28:46.863 00:28:46.863 --- 10.0.0.1 ping statistics --- 00:28:46.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.863 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:46.863 
16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:46.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:46.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:28:46.863 00:28:46.863 --- 10.0.0.2 ping statistics --- 00:28:46.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.863 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:46.863 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:46.863 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:46.863 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # return 1 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev= 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@160 -- # return 0 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:46.864 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target1 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # return 1 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev= 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@160 -- # return 0 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:46.864 ' 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:46.864 16:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:46.864 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=3244448 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 3244448 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3244448 ']' 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:47.126 16:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.126 [2024-11-05 16:52:54.024222] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:28:47.126 [2024-11-05 16:52:54.024288] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.126 [2024-11-05 16:52:54.119758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.126 [2024-11-05 16:52:54.155250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.126 [2024-11-05 16:52:54.155284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.126 [2024-11-05 16:52:54.155290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.126 [2024-11-05 16:52:54.155295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.126 [2024-11-05 16:52:54.155299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
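The launch-and-wait pattern visible above (nvmfappstart → waitforlisten) starts `nvmf_tgt` inside the target namespace and then polls for its RPC socket. A hedged sketch follows; the binary path, flags, and socket path are taken from this log, the trace shows the `ip netns exec` prefix repeated (one is sufficient, re-entering the same namespace is a no-op), and the polling loop is an illustrative stand-in for the harness's `waitforlisten` helper, not SPDK's actual implementation:

```shell
# Sketch of the target launch performed above (values from this log).
NS_CMD="ip netns exec nvmf_ns_spdk"
APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock   # default UNIX-domain RPC socket

# The command the harness runs (root required; printed here, not executed):
launch_cmd="$NS_CMD $APP -i 0 -e 0xFFFF -m 0x1E"
echo "$launch_cmd"

# waitforlisten, sketched: poll until the RPC socket exists, then the
# test proceeds to issue rpc_cmd calls against it.
wait_for_rpc() {
    i=0
    while [ "$i" -lt 3 ]; do        # short bound for illustration only
        [ -S "$RPC_SOCK" ] && return 0
        i=$((i + 1))
    done
    return 1
}
```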
00:28:47.126 [2024-11-05 16:52:54.156585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.126 [2024-11-05 16:52:54.156743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.126 [2024-11-05 16:52:54.156912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:47.126 [2024-11-05 16:52:54.157032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:48.068 [2024-11-05 16:52:54.871793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.068 16:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:48.068 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.069 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:48.069 Malloc1 00:28:48.069 [2024-11-05 16:52:54.989435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.069 Malloc2 00:28:48.069 Malloc3 00:28:48.069 Malloc4 00:28:48.069 Malloc5 00:28:48.330 Malloc6 00:28:48.330 Malloc7 00:28:48.330 Malloc8 00:28:48.330 Malloc9 
00:28:48.330 Malloc10 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3244832 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3244832 /var/tmp/bdevperf.sock 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3244832 ']' 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:48.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.330 { 00:28:48.330 "params": { 00:28:48.330 "name": "Nvme$subsystem", 00:28:48.330 "trtype": "$TEST_TRANSPORT", 00:28:48.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.330 "adrfam": "ipv4", 00:28:48.330 "trsvcid": "$NVMF_PORT", 00:28:48.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.330 "hdgst": ${hdgst:-false}, 00:28:48.330 "ddgst": ${ddgst:-false} 00:28:48.330 }, 00:28:48.330 "method": "bdev_nvme_attach_controller" 00:28:48.330 } 00:28:48.330 EOF 00:28:48.330 )") 00:28:48.330 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 
00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": ${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": ${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat 
<<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": ${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": ${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 
00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": ${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": ${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 [2024-11-05 16:52:55.437450] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:28:48.592 [2024-11-05 16:52:55.437505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244832 ] 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": ${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": 
${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:48.592 { 00:28:48.592 "params": { 00:28:48.592 "name": "Nvme$subsystem", 00:28:48.592 "trtype": "$TEST_TRANSPORT", 00:28:48.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.592 "adrfam": "ipv4", 00:28:48.592 "trsvcid": "$NVMF_PORT", 00:28:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.592 "hdgst": ${hdgst:-false}, 00:28:48.592 "ddgst": ${ddgst:-false} 00:28:48.592 }, 00:28:48.592 "method": "bdev_nvme_attach_controller" 00:28:48.592 } 00:28:48.592 EOF 00:28:48.592 )") 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 
00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:28:48.592 16:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:48.592 "params": { 00:28:48.593 "name": "Nvme1", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 00:28:48.593 "name": "Nvme2", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 00:28:48.593 "name": "Nvme3", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 00:28:48.593 "name": "Nvme4", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 
00:28:48.593 "name": "Nvme5", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 00:28:48.593 "name": "Nvme6", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 00:28:48.593 "name": "Nvme7", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 00:28:48.593 "name": "Nvme8", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 00:28:48.593 "name": "Nvme9", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 },{ 00:28:48.593 "params": { 00:28:48.593 "name": "Nvme10", 00:28:48.593 "trtype": "tcp", 00:28:48.593 "traddr": "10.0.0.2", 00:28:48.593 "adrfam": "ipv4", 00:28:48.593 "trsvcid": "4420", 00:28:48.593 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:48.593 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:48.593 "hdgst": false, 00:28:48.593 "ddgst": false 00:28:48.593 }, 00:28:48.593 "method": "bdev_nvme_attach_controller" 00:28:48.593 }' 00:28:48.593 [2024-11-05 16:52:55.508781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.593 [2024-11-05 16:52:55.545528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.985 Running I/O for 10 seconds... 00:28:49.985 16:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:49.985 16:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:28:49.985 16:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:49.985 16:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.985 16:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:50.245 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:50.507 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3244448 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3244448 ']' 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3244448 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3244448 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3244448' 00:28:50.784 killing process with pid 3244448 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3244448 00:28:50.784 16:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3244448
00:28:50.784 [2024-11-05 16:52:57.773140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9640 is same with the state(6) to be set
00:28:50.785 [... identical tqpair=0x14a9640 messages omitted ...]
00:28:50.785 [2024-11-05 16:52:57.774617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ac0a0 is same with the state(6) to be set
00:28:50.786 [... identical tqpair=0x14ac0a0 messages omitted ...]
00:28:50.786 [2024-11-05 16:52:57.776166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9b10 is same with the state(6) to be set
00:28:50.786 [... identical tqpair=0x14a9b10 messages omitted ...]
00:28:50.786 [2024-11-05 16:52:57.777843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9fe0 is same with the state(6) to be set
00:28:50.787 [... identical tqpair=0x14a9fe0 messages omitted ...]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9fe0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.778182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9fe0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779341] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779562] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779789] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.787 [2024-11-05 16:52:57.779882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.779901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.779919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.779937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.779955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.779974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.779992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780011] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780234] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.780309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa4d0 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781413] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781478] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781538] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781599] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.788 [2024-11-05 16:52:57.781614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.781619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.781623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aad20 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782742] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782967] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.782985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783191] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set 00:28:50.789 [2024-11-05 16:52:57.783412] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab6e0 is same with the state(6) to be set
00:28:50.789 [... same message repeated for tqpair=0x14ab6e0 through 16:52:57.783785, then for tqpair=0x14abbb0 from 16:52:57.784296 through 16:52:57.784601; roughly 70 near-identical occurrences elided ...]
00:28:50.790 [2024-11-05 16:52:57.786621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.790 [2024-11-05 16:52:57.786658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.790 [2024-11-05 16:52:57.786669]
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.790 [2024-11-05 16:52:57.786677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (cid:0-3) / ABORTED - SQ DELETION (00/08) pairs repeat for each admin qpair, each block ending with nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(6) to be set, for tqpair 0x14af5b0, 0x14ba090, 0x14dd9e0, 0x14e0aa0, 0x1085900, 0x1084160, 0x14b8e50, 0xfa6610 and 0x108ecb0 (16:52:57.786677 through 16:52:57.787441); repeats elided ...]
00:28:50.791 [2024-11-05 16:52:57.793491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14abbb0 is same with the state(6) to be set 00:28:50.791 [2024-11-05 16:52:57.793515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14abbb0 is same with the state(6) to be set
00:28:50.791 [2024-11-05 16:52:57.795831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.791 [2024-11-05 16:52:57.795854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE (cid:59-63, lba:32128-32640) and READ (cid:0-26, lba:24576-27904) commands on sqid:1, each len:128 and each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, elided (16:52:57.795870 through 16:52:57.796437) ...]
00:28:50.792 [2024-11-05 16:52:57.796447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 
16:52:57.796560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 
[2024-11-05 16:52:57.796868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.792 [2024-11-05 16:52:57.796903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.792 [2024-11-05 16:52:57.796913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.796920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.796930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.796938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.796948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.796955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.796967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.796975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.796985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.796992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.793 [2024-11-05 16:52:57.797246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.793 [2024-11-05 16:52:57.797264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.793 [2024-11-05 16:52:57.797280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.793 [2024-11-05 16:52:57.797297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dd5f0 is same with the state(6) to be set 00:28:50.793 [2024-11-05 16:52:57.797326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14af5b0 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ba090 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dd9e0 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e0aa0 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1085900 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1084160 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b8e50 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6610 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108ecb0 (9): Bad file descriptor 00:28:50.793 [2024-11-05 16:52:57.797670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 
16:52:57.797810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.797988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.797995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.798005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.798013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.798023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.798030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.798041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.798049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.798058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.793 [2024-11-05 16:52:57.798066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.793 [2024-11-05 16:52:57.798077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 
[2024-11-05 16:52:57.798219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 
16:52:57.798613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.794 [2024-11-05 16:52:57.798772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.794 [2024-11-05 16:52:57.798780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.805203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.805236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.805249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.805258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.805268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.805277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.808418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:50.795 [2024-11-05 16:52:57.808459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:50.795 [2024-11-05 16:52:57.808494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dd5f0 (9): Bad file descriptor 00:28:50.795 [2024-11-05 16:52:57.808771] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:50.795 [2024-11-05 16:52:57.809100] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:50.795 [2024-11-05 16:52:57.809189] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:50.795 [2024-11-05 16:52:57.809610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.795 [2024-11-05 16:52:57.809629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e0aa0 with addr=10.0.0.2, port=4420 00:28:50.795 [2024-11-05 16:52:57.809640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e0aa0 is same with the state(6) to be set 00:28:50.795 [2024-11-05 16:52:57.810079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.795 [2024-11-05 16:52:57.810120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14af5b0 with addr=10.0.0.2, port=4420 00:28:50.795 [2024-11-05 
16:52:57.810132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af5b0 is same with the state(6) to be set 00:28:50.795 [2024-11-05 16:52:57.810185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810289] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 
16:52:57.810502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810606] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.795 [2024-11-05 16:52:57.810787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.795 [2024-11-05 16:52:57.810797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 
[2024-11-05 16:52:57.810828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.810987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.810995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.811373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.811381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.812680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.812696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.812709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.812719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.812732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.812742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.812762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.812772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.796 [2024-11-05 16:52:57.812784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.796 [2024-11-05 16:52:57.812794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid increments 5-63 then wraps 0-56, lba advances in steps of 128 from 17024 to 31744, timestamps 2024-11-05 16:52:57.812805 through 16:52:57.816195 ...]
00:28:50.800 [2024-11-05 16:52:57.816204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.816212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:50.800 [2024-11-05 16:52:57.816223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.816230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.816240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.816248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.816257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.816265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.816275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.816282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.816292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.816299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.816309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.816317] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:50.800 [2024-11-05 16:52:57.817842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.800 [2024-11-05 16:52:57.817897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.800 [2024-11-05 16:52:57.817905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.817916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.817923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.817934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.817942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.817954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.817962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.817972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.817980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.817990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.817997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 
16:52:57.818258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 
[2024-11-05 16:52:57.818563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.801 [2024-11-05 16:52:57.818615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.801 [2024-11-05 16:52:57.818625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.818796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.818805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.820343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.820361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.820372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.820380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.820390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.820398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.802 [2024-11-05 16:52:57.820411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:50.802 [2024-11-05 16:52:57.820418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:50.802 - 00:28:50.804 [2024-11-05 16:52:57.820428 - 16:52:57.821507] (repeated command/completion pairs condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE*: READ sqid:1 cid:18-63 nsid:1 lba:18688-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then WRITE sqid:1 cid:0-13 nsid:1 lba:24576-26240 len:128, lba advancing by 128 per cid; each command followed by nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:28:50.804 [2024-11-05 16:52:57.821515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14954e0 is same with the state(6) to be set
00:28:50.804 - 00:28:50.805 [2024-11-05 16:52:57.822811 - 16:52:57.823776] (repeated command/completion pairs condensed: READ sqid:1 cid:0-51 nsid:1 lba:24576-31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, lba advancing by 128 per cid; each aborted with SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:28:50.805 [2024-11-05 16:52:57.823785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.805 [2024-11-05 16:52:57.823794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.805 [2024-11-05 16:52:57.823804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.805 [2024-11-05 16:52:57.823812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.805 [2024-11-05 16:52:57.823823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.805 [2024-11-05 16:52:57.823832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.805 [2024-11-05 16:52:57.823842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.805 [2024-11-05 16:52:57.823850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.805 [2024-11-05 16:52:57.823860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.805 [2024-11-05 16:52:57.823868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.805 [2024-11-05 16:52:57.823878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.805 [2024-11-05 16:52:57.823886] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.805 [2024-11-05 16:52:57.823896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.823904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.823914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.823921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.823931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.823939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.823949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.823957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.823967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.823975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.823985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.823992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.824003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14907d0 is same with the state(6) to be set 00:28:50.806 [2024-11-05 16:52:57.825301] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:50.806 [2024-11-05 16:52:57.825352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825733] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.806 [2024-11-05 16:52:57.825932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.806 [2024-11-05 16:52:57.825942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.825950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.825960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.825968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.825978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.825987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.825998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 
16:52:57.826053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:50.807 [2024-11-05 16:52:57.826355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.807 [2024-11-05 16:52:57.826510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.807 [2024-11-05 16:52:57.826518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cf560 is same with the state(6) to be set 00:28:50.807 [2024-11-05 16:52:57.827773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:50.807 [2024-11-05 16:52:57.827794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:50.807 [2024-11-05 16:52:57.827806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:50.807 [2024-11-05 16:52:57.827818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:50.807 [2024-11-05 16:52:57.827868] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e0aa0 (9): Bad file descriptor 00:28:50.808 [2024-11-05 16:52:57.827883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14af5b0 (9): Bad file descriptor 00:28:50.808 [2024-11-05 16:52:57.827923] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:50.808 [2024-11-05 16:52:57.827937] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:50.808 [2024-11-05 16:52:57.827954] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:50.808 [2024-11-05 16:52:57.827968] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:50.808 [2024-11-05 16:52:57.827979] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:28:50.808 [2024-11-05 16:52:57.828095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:50.808 [2024-11-05 16:52:57.828110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:50.808 [2024-11-05 16:52:57.828120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:50.808 [2024-11-05 16:52:57.828510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.808 [2024-11-05 16:52:57.828527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108ecb0 with addr=10.0.0.2, port=4420 00:28:50.808 [2024-11-05 16:52:57.828536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108ecb0 is same with the state(6) to be set 00:28:50.808 [2024-11-05 16:52:57.828974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.808 [2024-11-05 16:52:57.829018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1085900 with addr=10.0.0.2, port=4420 00:28:50.808 [2024-11-05 16:52:57.829029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1085900 is same with the state(6) to be set 00:28:50.808 [2024-11-05 16:52:57.829352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.808 [2024-11-05 16:52:57.829366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1084160 with addr=10.0.0.2, port=4420 00:28:50.808 [2024-11-05 16:52:57.829379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1084160 is same with the state(6) to be set 00:28:50.808 [2024-11-05 16:52:57.829694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.808 [2024-11-05 16:52:57.829705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0x14b8e50 with addr=10.0.0.2, port=4420 00:28:50.808 [2024-11-05 16:52:57.829712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b8e50 is same with the state(6) to be set 00:28:50.808 [2024-11-05 16:52:57.829721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:50.808 [2024-11-05 16:52:57.829729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:50.808 [2024-11-05 16:52:57.829738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:50.808 [2024-11-05 16:52:57.829755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:50.808 [2024-11-05 16:52:57.829764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:50.808 [2024-11-05 16:52:57.829771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:50.808 [2024-11-05 16:52:57.829778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:50.808 [2024-11-05 16:52:57.829784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:28:50.808 [2024-11-05 16:52:57.831404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:50.808 [2024-11-05 16:52:57.831732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831835] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.808 [2024-11-05 16:52:57.831889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.808 [2024-11-05 16:52:57.831900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.831907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.831917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.831925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.831935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.831942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.831953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.831961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.831971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.831980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.831989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.831997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 
16:52:57.832143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832244] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 
[2024-11-05 16:52:57.832450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.809 [2024-11-05 16:52:57.832543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.809 [2024-11-05 16:52:57.832553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.809 [2024-11-05 16:52:57.832561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:50.809 [2024-11-05 16:52:57.832571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.809 [2024-11-05 16:52:57.832579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:50.809 [2024-11-05 16:52:57.832588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491a90 is same with the state(6) to be set
00:28:51.072 task offset: 32000 on job bdev=Nvme7n1 fails
00:28:51.072
00:28:51.072 Latency(us)
00:28:51.072 [2024-11-05T15:52:58.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:51.072 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme1n1 ended in about 0.96 seconds with error
00:28:51.072 Verification LBA range: start 0x0 length 0x400
00:28:51.072 Nvme1n1 : 0.96 200.68 12.54 66.89 0.00 236536.53 20753.07 272629.76
00:28:51.072 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme2n1 ended in about 0.96 seconds with error
00:28:51.072 Verification LBA range: start 0x0 length 0x400
00:28:51.072 Nvme2n1 : 0.96 133.44 8.34 66.72 0.00 310017.42 19988.48 255153.49
00:28:51.072 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme3n1 ended in about 0.96 seconds with error
00:28:51.072 Verification LBA range: start 0x0 length 0x400
00:28:51.072 Nvme3n1 : 0.96 199.65 12.48 66.55 0.00 228320.00 39321.60 223696.21
00:28:51.072 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme4n1 ended in about 0.96 seconds with error
00:28:51.072 Verification LBA range: start 0x0 length 0x400
00:28:51.072 Nvme4n1 : 0.96 199.14 12.45 66.38 0.00 224252.80 21299.20 246415.36
00:28:51.072 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme5n1 ended in about 0.95 seconds with error
00:28:51.072 Verification LBA range: start 0x0 length 0x400
00:28:51.072 Nvme5n1 : 0.95 201.61 12.60 67.20 0.00 216557.01 15947.09 255153.49
00:28:51.072 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme6n1 ended in about 0.97 seconds with error
00:28:51.072 Verification LBA range: start 0x0 length 0x400
00:28:51.072 Nvme6n1 : 0.97 146.87 9.18 66.19 0.00 267761.22 19879.25 269134.51
00:28:51.072 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme7n1 ended in about 0.95 seconds with error
00:28:51.072 Verification LBA range: start 0x0 length 0x400
00:28:51.072 Nvme7n1 : 0.95 201.93 12.62 67.31 0.00 206708.05 17585.49 248162.99
00:28:51.072 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme8n1 ended in about 0.97 seconds with error
00:28:51.072 Verification LBA range: start 0x0 length 0x400
00:28:51.072 Nvme8n1 : 0.97 198.07 12.38 66.02 0.00 206618.03 20971.52 242920.11
00:28:51.072 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.072 Job: Nvme9n1 ended in about 0.98 seconds with error
00:28:51.073 Verification LBA range: start 0x0 length 0x400
00:28:51.073 Nvme9n1 : 0.98 130.89 8.18 65.45 0.00 272249.17 29272.75 265639.25
00:28:51.073 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:51.073 Job: Nvme10n1 ended in about 0.97 seconds with error
00:28:51.073 Verification LBA range: start 0x0 length 0x400
00:28:51.073 Nvme10n1 : 0.97 131.71 8.23 65.85 0.00 264120.04 18568.53 302339.41
[2024-11-05T15:52:58.136Z] ===================================================================================================================
00:28:51.073 [2024-11-05T15:52:58.136Z] Total : 1743.99 109.00 664.57 0.00 239571.66 15947.09 302339.41
00:28:51.073 [2024-11-05 16:52:57.858213] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:51.073 [2024-11-05 16:52:57.858253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:51.073 [2024-11-05 16:52:57.858692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.073 [2024-11-05 16:52:57.858710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa6610 with addr=10.0.0.2, port=4420
00:28:51.073 [2024-11-05 16:52:57.858721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa6610 is same with the state(6) to be set
00:28:51.073 [2024-11-05 16:52:57.859135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.073 [2024-11-05 16:52:57.859179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ba090 with addr=10.0.0.2, port=4420
00:28:51.073 [2024-11-05 16:52:57.859190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ba090 is same with the state(6) to be set
00:28:51.073 [2024-11-05 16:52:57.859528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.073 [2024-11-05 16:52:57.859542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dd9e0 with addr=10.0.0.2, port=4420
00:28:51.073 [2024-11-05 16:52:57.859550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dd9e0 is same with the state(6) to be set
00:28:51.073 [2024-11-05 16:52:57.859565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108ecb0 (9): Bad file
descriptor 00:28:51.073 [2024-11-05 16:52:57.859579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1085900 (9): Bad file descriptor 00:28:51.073 [2024-11-05 16:52:57.859588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1084160 (9): Bad file descriptor 00:28:51.073 [2024-11-05 16:52:57.859598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b8e50 (9): Bad file descriptor 00:28:51.073 [2024-11-05 16:52:57.860078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.073 [2024-11-05 16:52:57.860095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dd5f0 with addr=10.0.0.2, port=4420 00:28:51.073 [2024-11-05 16:52:57.860103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dd5f0 is same with the state(6) to be set 00:28:51.073 [2024-11-05 16:52:57.860112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa6610 (9): Bad file descriptor 00:28:51.073 [2024-11-05 16:52:57.860127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ba090 (9): Bad file descriptor 00:28:51.073 [2024-11-05 16:52:57.860136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dd9e0 (9): Bad file descriptor 00:28:51.073 [2024-11-05 16:52:57.860145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:51.073 [2024-11-05 16:52:57.860153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:51.073 [2024-11-05 16:52:57.860162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:28:51.073 [2024-11-05 16:52:57.860172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:51.073 [2024-11-05 16:52:57.860181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:51.073 [2024-11-05 16:52:57.860188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:51.073 [2024-11-05 16:52:57.860196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:51.073 [2024-11-05 16:52:57.860202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:51.073 [2024-11-05 16:52:57.860210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:51.073 [2024-11-05 16:52:57.860216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:51.073 [2024-11-05 16:52:57.860224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:51.073 [2024-11-05 16:52:57.860230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:51.073 [2024-11-05 16:52:57.860238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:51.073 [2024-11-05 16:52:57.860244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:51.073 [2024-11-05 16:52:57.860251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:28:51.073 [2024-11-05 16:52:57.860259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:51.073 [2024-11-05 16:52:57.860310] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:51.073 [2024-11-05 16:52:57.860323] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:51.073 [2024-11-05 16:52:57.860335] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:51.073 [2024-11-05 16:52:57.860730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dd5f0 (9): Bad file descriptor 00:28:51.073 [2024-11-05 16:52:57.860744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:51.073 [2024-11-05 16:52:57.860756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:51.073 [2024-11-05 16:52:57.860763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:51.073 [2024-11-05 16:52:57.860770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:51.073 [2024-11-05 16:52:57.860777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:51.073 [2024-11-05 16:52:57.860784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:51.073 [2024-11-05 16:52:57.860795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:28:51.073 [2024-11-05 16:52:57.860802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:51.073 [2024-11-05 16:52:57.860810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:51.073 [2024-11-05 16:52:57.860817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:51.073 [2024-11-05 16:52:57.860825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:51.073 [2024-11-05 16:52:57.860832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:51.073 [2024-11-05 16:52:57.861085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:51.073 [2024-11-05 16:52:57.861100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:51.073 [2024-11-05 16:52:57.861109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:51.073 [2024-11-05 16:52:57.861119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:51.073 [2024-11-05 16:52:57.861128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:51.073 [2024-11-05 16:52:57.861137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:51.073 [2024-11-05 16:52:57.861182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:51.073 [2024-11-05 16:52:57.861190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:51.073 
[2024-11-05 16:52:57.861198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:51.073 [2024-11-05 16:52:57.861205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:51.073 [2024-11-05 16:52:57.861588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.073 [2024-11-05 16:52:57.861603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14af5b0 with addr=10.0.0.2, port=4420 00:28:51.073 [2024-11-05 16:52:57.861611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af5b0 is same with the state(6) to be set 00:28:51.073 [2024-11-05 16:52:57.861920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.073 [2024-11-05 16:52:57.861933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e0aa0 with addr=10.0.0.2, port=4420 00:28:51.073 [2024-11-05 16:52:57.861940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e0aa0 is same with the state(6) to be set 00:28:51.073 [2024-11-05 16:52:57.862257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.073 [2024-11-05 16:52:57.862269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b8e50 with addr=10.0.0.2, port=4420 00:28:51.073 [2024-11-05 16:52:57.862276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b8e50 is same with the state(6) to be set 00:28:51.073 [2024-11-05 16:52:57.862629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.073 [2024-11-05 16:52:57.862640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1084160 with addr=10.0.0.2, port=4420 00:28:51.073 [2024-11-05 16:52:57.862647] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1084160 is same with the state(6) to be set 00:28:51.073 [2024-11-05 16:52:57.862974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.073 [2024-11-05 16:52:57.862985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1085900 with addr=10.0.0.2, port=4420 00:28:51.073 [2024-11-05 16:52:57.862995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1085900 is same with the state(6) to be set 00:28:51.073 [2024-11-05 16:52:57.863309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.073 [2024-11-05 16:52:57.863320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108ecb0 with addr=10.0.0.2, port=4420 00:28:51.073 [2024-11-05 16:52:57.863327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108ecb0 is same with the state(6) to be set 00:28:51.073 [2024-11-05 16:52:57.863359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14af5b0 (9): Bad file descriptor 00:28:51.074 [2024-11-05 16:52:57.863369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e0aa0 (9): Bad file descriptor 00:28:51.074 [2024-11-05 16:52:57.863378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b8e50 (9): Bad file descriptor 00:28:51.074 [2024-11-05 16:52:57.863387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1084160 (9): Bad file descriptor 00:28:51.074 [2024-11-05 16:52:57.863397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1085900 (9): Bad file descriptor 00:28:51.074 [2024-11-05 16:52:57.863407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108ecb0 (9): Bad file descriptor 00:28:51.074 
[2024-11-05 16:52:57.863434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:51.074 [2024-11-05 16:52:57.863442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:51.074 [2024-11-05 16:52:57.863449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:51.074 [2024-11-05 16:52:57.863456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:51.074 [2024-11-05 16:52:57.863463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:51.074 [2024-11-05 16:52:57.863469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:51.074 [2024-11-05 16:52:57.863476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:51.074 [2024-11-05 16:52:57.863483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:51.074 [2024-11-05 16:52:57.863491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:51.074 [2024-11-05 16:52:57.863498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:51.074 [2024-11-05 16:52:57.863505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:51.074 [2024-11-05 16:52:57.863511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:28:51.074 [2024-11-05 16:52:57.863519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:51.074 [2024-11-05 16:52:57.863526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:51.074 [2024-11-05 16:52:57.863533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:51.074 [2024-11-05 16:52:57.863540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:51.074 [2024-11-05 16:52:57.863547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:51.074 [2024-11-05 16:52:57.863553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:51.074 [2024-11-05 16:52:57.863564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:51.074 [2024-11-05 16:52:57.863570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:51.074 [2024-11-05 16:52:57.863578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:51.074 [2024-11-05 16:52:57.863584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:51.074 [2024-11-05 16:52:57.863591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:51.074 [2024-11-05 16:52:57.863598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:28:51.074 16:52:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3244832 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3244832 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3244832 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:52.017 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:52.017 rmmod nvme_tcp 00:28:52.017 rmmod nvme_fabrics 00:28:52.278 rmmod nvme_keyring 00:28:52.278 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:52.278 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e 00:28:52.278 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0 00:28:52.278 16:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 3244448 ']' 00:28:52.278 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 3244448 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3244448 ']' 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3244448 00:28:52.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3244448) - No such process 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3244448 is not found' 00:28:52.279 Process with pid 3244448 is not found 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@254 -- # local dev 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:52.279 16:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@121 -- # return 0 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@212 -- # [[ -n 
'' ]] 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=() 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@274 -- # iptr 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # iptables-save 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # iptables-restore 00:28:54.194 00:28:54.194 real 0m7.733s 00:28:54.194 user 0m18.337s 00:28:54.194 sys 0m1.281s 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.194 ************************************ 00:28:54.194 END TEST nvmf_shutdown_tc3 00:28:54.194 ************************************ 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 
00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:54.194 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:54.456 ************************************ 00:28:54.456 START TEST nvmf_shutdown_tc4 00:28:54.456 ************************************ 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:54.456 16:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:54.456 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=() 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=() 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=() 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=() 
00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # mlx=() 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:54.457 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:54.457 16:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:54.457 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:54.457 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:54.457 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.457 
16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@247 -- # create_target_ns 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:54.457 
16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:54.457 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=() 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:54.458 16:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:54.458 10.0.0.1 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local 
val=167772162 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:54.458 10.0.0.2 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
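The `val_to_ip` calls traced above (nvmf/setup.sh@11-13) turn pooled integers such as 167772161 (0x0A000001) and 167772162 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 that `set_ip` then assigns. A plausible reconstruction of that helper, assuming straightforward bit-shift unpacking into the `printf '%u.%u.%u.%u\n'` call the trace shows:

```shell
# Hypothetical reconstruction of setup.sh's val_to_ip helper (assumed from the
# xtrace output): unpack a 32-bit integer into dotted-quad IPv4 notation.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This matches the ip_pool arithmetic earlier in the trace (`ip_pool=0x0a000001`, incremented by 2 per interface pair), which is why each pair lands on consecutive 10.0.0.x addresses.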
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:54.458 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:54.720 
16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:54.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:54.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.609 ms 00:28:54.720 00:28:54.720 --- 10.0.0.1 ping statistics --- 00:28:54.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.720 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:54.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:54.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:28:54.720 00:28:54.720 --- 10.0.0.2 ping statistics --- 00:28:54.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.720 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:28:54.720 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:54.721 16:53:01 
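Both `ping -c 1` runs above succeed and print an `rtt min/avg/max/mdev` summary line. If one wanted to pull the average RTT out of such output (for example, to log or threshold it), a small awk filter suffices; `parse_avg_rtt` is a hypothetical helper for illustration, not part of the SPDK scripts:

```shell
# Extract the avg field from a ping statistics line like:
#   rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
# Splitting on '/' puts the avg value in field 5.
parse_avg_rtt() {
  awk -F'/' '/^rtt min\/avg\/max\/mdev/ { print $5 }'
}

parse_avg_rtt <<'EOF'
rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
EOF
# prints 0.609
```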
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:54.721 16:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # return 1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev= 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@160 -- # return 0 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:54.721 16:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target0 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # return 1 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev= 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@160 -- # return 0 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:54.721 ' 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:54.721 16:53:01 
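The repeated `get_net_dev` lookups above resolve logical roles (`initiator0`, `target0`) to physical netdevs via the `dev_map` associative array populated during setup, and return 1 for roles that were never populated (`initiator1`, `target1`), which is why `NVMF_SECOND_INITIATOR_IP` and `NVMF_SECOND_TARGET_IP` end up empty. A minimal sketch of that pattern, assuming the mapping shown in the trace (the exact setup.sh implementation may differ):

```shell
# dev_map as populated by the single-pair phy setup in the trace above.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

# Sketch of the get_net_dev lookup: echo the mapped netdev, or return 1
# when the role has no entry (mirroring "[[ -n '' ]] ... return 1" above).
get_net_dev() {
  local dev=$1
  [[ -n ${dev_map[$dev]:-} ]] || return 1
  echo "${dev_map[$dev]}"
}
```

With only one interface pair configured, `get_net_dev initiator1` fails cleanly rather than erroring, letting the callers fall back to an empty IP string.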
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:54.721 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # nvmfpid=3246307 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 3246307 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3246307 ']' 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:54.982 16:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:54.982 [2024-11-05 16:53:01.858399] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:28:54.982 [2024-11-05 16:53:01.858493] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.982 [2024-11-05 16:53:01.954342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.982 [2024-11-05 16:53:01.988023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.982 [2024-11-05 16:53:01.988068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.982 [2024-11-05 16:53:01.988074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.982 [2024-11-05 16:53:01.988079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.982 [2024-11-05 16:53:01.988083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:54.982 [2024-11-05 16:53:01.989350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.982 [2024-11-05 16:53:01.989509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.982 [2024-11-05 16:53:01.989626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.982 [2024-11-05 16:53:01.989628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.923 [2024-11-05 16:53:02.704027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.923 16:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.923 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.924 16:53:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.924 Malloc1 00:28:55.924 [2024-11-05 16:53:02.809118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.924 Malloc2 00:28:55.924 Malloc3 00:28:55.924 Malloc4 00:28:55.924 Malloc5 00:28:55.924 Malloc6 00:28:56.184 Malloc7 00:28:56.184 Malloc8 00:28:56.184 Malloc9 
00:28:56.184 Malloc10 00:28:56.184 16:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.184 16:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:56.184 16:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:56.184 16:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.184 16:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3246539 00:28:56.184 16:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:56.184 16:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:56.445 [2024-11-05 16:53:03.274304] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3246307 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3246307 ']' 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3246307 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3246307 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3246307' 00:29:01.744 killing process with pid 3246307 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3246307 00:29:01.744 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3246307 00:29:01.744 [2024-11-05 16:53:08.288314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcefc20 is same with the state(6) to be set 00:29:01.744 [2024-11-05 
16:53:08.288730]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf00f0 is same with the state(6) to be set 00:29:01.744 [2024-11-05 16:53:08.288949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf05c0 is same with the state(6) to be set 00:29:01.744 [2024-11-05 16:53:08.289104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef750 is same with the state(6) to be set 00:29:01.744 [2024-11-05 16:53:08.293416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf0f60
is same with the state(6) to be set 00:29:01.744 [2024-11-05 16:53:08.293749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1430 is same with the state(6) to be set 00:29:01.744 [2024-11-05 16:53:08.294105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1900 is same with the state(6) to be set
00:29:01.744 [2024-11-05 16:53:08.294370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf0a90 is same with the state(6) to be set 00:29:01.744 [2024-11-05 16:53:08.294392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf0a90 is same with the state(6) to be set 00:29:01.745 [2024-11-05 16:53:08.294398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf0a90 is same with the state(6) to be set 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error 
(sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 [2024-11-05 16:53:08.295968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error 
(sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 
00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 [2024-11-05 16:53:08.296815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 
Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, 
sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 [2024-11-05 16:53:08.297694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 
00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 [2024-11-05 16:53:08.298160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf22a0 is same with the state(6) to be set 00:29:01.746 starting I/O failed: -6 00:29:01.746 [2024-11-05 16:53:08.298177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf22a0 is same with the state(6) to be set 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 [2024-11-05 16:53:08.298182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf22a0 is same with the state(6) to be set 00:29:01.746 starting I/O failed: -6 00:29:01.746 [2024-11-05 16:53:08.298188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf22a0 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf22a0 is same with the state(6) to be set 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 [2024-11-05 16:53:08.298199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf22a0 is same with the state(6) to be set 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 
starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 
00:29:01.746 [2024-11-05 16:53:08.298539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2770 is same with the state(6) to be set 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 [2024-11-05 16:53:08.298557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2770 is same with the state(6) to be set 00:29:01.746 starting I/O failed: -6 00:29:01.746 [2024-11-05 16:53:08.298567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2770 is same with the state(6) to be set 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 [2024-11-05 16:53:08.298573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2770 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2770 is same with the state(6) to be set 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 [2024-11-05 16:53:08.298794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2c40 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298810]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2c40 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2c40 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2c40 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2c40 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2c40 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2c40 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.298840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf2c40 is same with the state(6) to be set 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 [2024-11-05 16:53:08.299109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.746 NVMe io qpair process completion error 00:29:01.746 [2024-11-05 16:53:08.299337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1dd0 is same with the state(6) to be set 00:29:01.746 [2024-11-05 16:53:08.299354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1dd0 is same with the state(6) to be set 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed 
with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 [2024-11-05 16:53:08.300154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.746 starting I/O failed: -6 00:29:01.746 starting I/O failed: -6 00:29:01.746 starting I/O failed: 
-6 00:29:01.746 starting I/O failed: -6 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed: -6 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 [2024-11-05 
16:53:08.301091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 
00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with 
error (sct=0, sc=8) 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 [2024-11-05 16:53:08.302049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O 
failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting 
I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.747 Write completed with error (sct=0, sc=8) 00:29:01.747 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 
starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 [2024-11-05 16:53:08.303532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.748 NVMe io qpair process completion error 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 
00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 [2024-11-05 16:53:08.304690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error 
(sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 
00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 [2024-11-05 16:53:08.305584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 
00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.748 starting I/O failed: -6 00:29:01.748 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with 
error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 [2024-11-05 16:53:08.306481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed 
with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write completed with error (sct=0, sc=8) 00:29:01.749 starting I/O failed: -6 00:29:01.749 Write 
00:29:01.749 Write completed with error (sct=0, sc=8)
00:29:01.749 starting I/O failed: -6
00:29:01.749 [2024-11-05 16:53:08.308897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.749 NVMe io qpair process completion error
00:29:01.750 [2024-11-05 16:53:08.309987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.750 [2024-11-05 16:53:08.310896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:01.750 [2024-11-05 16:53:08.311818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:01.751 [2024-11-05 16:53:08.313410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.751 NVMe io qpair process completion error
00:29:01.751 [2024-11-05 16:53:08.314554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.752 [2024-11-05 16:53:08.315466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:01.752 [2024-11-05 16:53:08.316376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:01.752 [2024-11-05 16:53:08.317975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.752 NVMe io qpair process completion error
00:29:01.753 [2024-11-05 16:53:08.318942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.753 [2024-11-05 16:53:08.319740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 
Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 [2024-11-05 16:53:08.320680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.753 starting I/O failed: -6 00:29:01.753 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write 
completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 
Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 
00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 [2024-11-05 16:53:08.323307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.754 NVMe io qpair process completion error 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 
00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 [2024-11-05 16:53:08.324514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error 
(sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 Write completed with error (sct=0, sc=8) 00:29:01.754 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 
00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 [2024-11-05 16:53:08.325325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 
Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, 
sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 [2024-11-05 16:53:08.326253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 
00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: 
-6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.755 starting I/O failed: -6 00:29:01.755 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O 
failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 [2024-11-05 16:53:08.327707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.756 NVMe io qpair process completion error 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 starting I/O failed: -6 00:29:01.756 Write completed with error (sct=0, sc=8) 00:29:01.756 Write completed with error (sct=0, 
00:29:01.756 Write completed with error (sct=0, sc=8)
00:29:01.756 starting I/O failed: -6
00:29:01.756 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted]
00:29:01.756 [2024-11-05 16:53:08.330113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.757 [repeated write-completion error lines omitted]
00:29:01.757 [2024-11-05 16:53:08.332758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.757 NVMe io qpair process completion error
00:29:01.757 [repeated write-completion error lines omitted]
00:29:01.757 [2024-11-05 16:53:08.333937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.758 [repeated write-completion error lines omitted]
00:29:01.758 [2024-11-05 16:53:08.334766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.758 [repeated write-completion error lines omitted]
00:29:01.758 [2024-11-05 16:53:08.335687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:01.758 [repeated write-completion error lines omitted]
00:29:01.758 [2024-11-05 16:53:08.337495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:01.759 NVMe io qpair process completion error
00:29:01.759 [repeated write-completion error lines omitted]
00:29:01.759 [2024-11-05 16:53:08.338653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.759 [repeated write-completion error lines omitted]
00:29:01.759 [2024-11-05 16:53:08.339478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.760 [repeated write-completion error lines omitted]
00:29:01.760 [2024-11-05 16:53:08.340441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:01.760 [repeated write-completion error lines omitted]
00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: 
-6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 Write completed with error (sct=0, sc=8) 00:29:01.760 starting I/O failed: -6 00:29:01.760 [2024-11-05 16:53:08.342328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.760 NVMe io qpair process completion error 00:29:01.760 Initializing NVMe Controllers 00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:01.760 Controller IO queue size 128, less than required. 00:29:01.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:01.760 Initialization complete. Launching workers. 
00:29:01.760 ======================================================== 00:29:01.760 Latency(us) 00:29:01.760 Device Information : IOPS MiB/s Average min max 00:29:01.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1874.20 80.53 68313.22 872.60 120650.90 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1875.88 80.60 68271.62 514.37 122654.58 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1905.03 81.86 67244.89 767.18 121445.21 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1875.25 80.58 68343.95 909.60 120932.49 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1892.24 81.31 67751.78 722.70 123361.17 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1889.30 81.18 67875.39 735.38 123558.24 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1884.27 80.96 68091.06 811.04 127273.00 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1892.66 81.33 67800.96 692.24 118502.41 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1879.44 80.76 68312.47 825.01 130548.73 00:29:01.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1915.10 82.29 67063.62 719.42 132025.98 00:29:01.761 ======================================================== 00:29:01.761 Total : 18883.37 811.39 67903.98 514.37 132025.98 00:29:01.761 00:29:01.761 [2024-11-05 16:53:08.347370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f560 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fbc0 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2020410 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020740 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021ae0 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021900 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fef0 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2021720 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f890 is same with the state(6) to be set 00:29:01.761 [2024-11-05 16:53:08.347646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020a70 is same with the state(6) to be set 00:29:01.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:01.761 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3246539 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3246539 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@638 -- # local arg=wait 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3246539 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@335 -- # nvmfcleanup 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:02.705 rmmod nvme_tcp 00:29:02.705 rmmod nvme_fabrics 00:29:02.705 rmmod nvme_keyring 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # return 0 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 3246307 ']' 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 3246307 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3246307 ']' 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3246307 00:29:02.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3246307) - No such process 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3246307 is not found' 00:29:02.705 Process with pid 3246307 is not found 
00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@254 -- # local dev 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:02.705 16:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@121 -- # return 0 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:04.619 16:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=() 00:29:04.619 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@274 -- # iptr 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- 
# iptables-save 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- # iptables-restore 00:29:04.881 00:29:04.881 real 0m10.404s 00:29:04.881 user 0m28.253s 00:29:04.881 sys 0m3.917s 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:04.881 ************************************ 00:29:04.881 END TEST nvmf_shutdown_tc4 00:29:04.881 ************************************ 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:04.881 00:29:04.881 real 0m43.542s 00:29:04.881 user 1m45.377s 00:29:04.881 sys 0m13.612s 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.881 ************************************ 00:29:04.881 END TEST nvmf_shutdown 00:29:04.881 ************************************ 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:04.881 ************************************ 00:29:04.881 START TEST nvmf_nsid 00:29:04.881 ************************************ 00:29:04.881 16:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:04.881 * Looking for test storage... 00:29:04.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:29:04.881 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" 
in 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:05.144 16:53:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:05.144 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:05.144 --rc genhtml_branch_coverage=1 00:29:05.144 --rc genhtml_function_coverage=1 00:29:05.144 --rc genhtml_legend=1 00:29:05.144 --rc geninfo_all_blocks=1 00:29:05.144 --rc geninfo_unexecuted_blocks=1 00:29:05.144 00:29:05.144 ' 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:05.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.144 --rc genhtml_branch_coverage=1 00:29:05.144 --rc genhtml_function_coverage=1 00:29:05.144 --rc genhtml_legend=1 00:29:05.144 --rc geninfo_all_blocks=1 00:29:05.144 --rc geninfo_unexecuted_blocks=1 00:29:05.144 00:29:05.144 ' 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:05.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.144 --rc genhtml_branch_coverage=1 00:29:05.144 --rc genhtml_function_coverage=1 00:29:05.144 --rc genhtml_legend=1 00:29:05.144 --rc geninfo_all_blocks=1 00:29:05.144 --rc geninfo_unexecuted_blocks=1 00:29:05.144 00:29:05.144 ' 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:05.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.144 --rc genhtml_branch_coverage=1 00:29:05.144 --rc genhtml_function_coverage=1 00:29:05.144 --rc genhtml_legend=1 00:29:05.144 --rc geninfo_all_blocks=1 00:29:05.144 --rc geninfo_unexecuted_blocks=1 00:29:05.144 00:29:05.144 ' 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.144 
16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.144 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.145 16:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:29:05.145 16:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:05.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.145 16:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:29:05.145 16:53:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/common.sh@135 -- # local -ga net_devs 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # local -ga mlx 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:13.296 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:13.296 16:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:13.296 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:13.296 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:13.296 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.296 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:13.296 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:13.297 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@247 -- # create_target_ns 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:13.297 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:13.297 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:13.297 10.0.0.1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:13.297 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:13.297 10.0.0.2 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:13.297 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:13.297 
16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:13.297 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 
00:29:13.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.579 ms 00:29:13.298 00:29:13.298 --- 10.0.0.1 ping statistics --- 00:29:13.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.298 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:13.298 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:13.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:29:13.298 00:29:13.298 --- 10.0.0.2 ping statistics --- 00:29:13.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.298 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # 
get_initiator_ip_address 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:13.298 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # return 1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev= 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@160 -- # return 0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # 
[[ -n cvl_0_1 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:13.298 16:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # return 1 00:29:13.298 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev= 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@160 -- # return 0 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:13.299 ' 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=3252064 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 3252064 00:29:13.299 
16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3252064 ']' 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.299 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.299 [2024-11-05 16:53:19.544655] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:29:13.299 [2024-11-05 16:53:19.544711] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.299 [2024-11-05 16:53:19.624361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.299 [2024-11-05 16:53:19.661024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.299 [2024-11-05 16:53:19.661060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:13.299 [2024-11-05 16:53:19.661068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.299 [2024-11-05 16:53:19.661075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.299 [2024-11-05 16:53:19.661081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.299 [2024-11-05 16:53:19.661645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.299 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:13.299 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:29:13.299 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:13.299 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:13.299 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3252095 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8d3d096f-a069-4f88-846a-2a23b5ce6c92 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=07698d6f-1c3a-427b-8602-df2c24a85499 00:29:13.561 16:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c89f3449-7b2a-4293-a6dc-2f1690d077cd 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.561 [2024-11-05 16:53:20.441554] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:29:13.561 [2024-11-05 16:53:20.441604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252095 ] 00:29:13.561 null0 00:29:13.561 null1 00:29:13.561 null2 00:29:13.561 [2024-11-05 16:53:20.462358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.561 [2024-11-05 16:53:20.486548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.561 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3252095 /var/tmp/tgt2.sock 00:29:13.562 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3252095 ']' 00:29:13.562 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:13.562 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.562 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/tgt2.sock...' 00:29:13.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:29:13.562 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.562 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:13.562 [2024-11-05 16:53:20.530117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.562 [2024-11-05 16:53:20.566027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.822 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:13.822 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:29:13.822 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:14.140 [2024-11-05 16:53:21.053633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.140 [2024-11-05 16:53:21.069766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:14.140 nvme0n1 nvme0n2 00:29:14.140 nvme1n1 00:29:14.140 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:14.140 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:14.140 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:15.523 16:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:29:15.523 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8d3d096f-a069-4f88-846a-2a23b5ce6c92 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d 
- 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8d3d096fa0694f88846a2a23b5ce6c92 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8D3D096FA0694F88846A2A23B5CE6C92 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8D3D096FA0694F88846A2A23B5CE6C92 == \8\D\3\D\0\9\6\F\A\0\6\9\4\F\8\8\8\4\6\A\2\A\2\3\B\5\C\E\6\C\9\2 ]] 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 07698d6f-1c3a-427b-8602-df2c24a85499 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 
00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=07698d6f1c3a427b8602df2c24a85499 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 07698D6F1C3A427B8602DF2C24A85499 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 07698D6F1C3A427B8602DF2C24A85499 == \0\7\6\9\8\D\6\F\1\C\3\A\4\2\7\B\8\6\0\2\D\F\2\C\2\4\A\8\5\4\9\9 ]] 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c89f3449-7b2a-4293-a6dc-2f1690d077cd 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 
nguid 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c89f34497b2a4293a6dc2f1690d077cd 00:29:16.908 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C89F34497B2A4293A6DC2F1690D077CD 00:29:16.909 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C89F34497B2A4293A6DC2F1690D077CD == \C\8\9\F\3\4\4\9\7\B\2\A\4\2\9\3\A\6\D\C\2\F\1\6\9\0\D\0\7\7\C\D ]] 00:29:16.909 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3252095 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3252095 ']' 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3252095 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3252095 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3252095' 00:29:17.170 killing process with pid 3252095 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3252095 00:29:17.170 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3252095 00:29:17.431 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:17.432 rmmod nvme_tcp 00:29:17.432 rmmod nvme_fabrics 00:29:17.432 rmmod nvme_keyring 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 3252064 ']' 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 3252064 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3252064 ']' 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3252064 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:29:17.432 16:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3252064 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3252064' 00:29:17.432 killing process with pid 3252064 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3252064 00:29:17.432 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3252064 00:29:17.693 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:17.693 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:29:17.693 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@254 -- # local dev 00:29:17.693 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:17.693 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:17.693 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:17.693 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # return 0 00:29:19.608 16:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@41 -- # dev_map=() 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@274 -- # iptr 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-save 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-restore 00:29:19.608 00:29:19.608 real 0m14.849s 00:29:19.608 user 0m11.414s 00:29:19.608 sys 0m6.651s 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:19.608 16:53:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:19.608 ************************************ 00:29:19.608 END TEST nvmf_nsid 00:29:19.608 ************************************ 00:29:19.870 16:53:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:19.870 00:29:19.870 real 12m58.126s 00:29:19.870 user 27m7.173s 00:29:19.870 sys 3m51.239s 00:29:19.870 16:53:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:19.870 16:53:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:19.870 ************************************ 00:29:19.870 END TEST nvmf_target_extra 00:29:19.870 ************************************ 00:29:19.870 16:53:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:19.870 16:53:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:19.870 16:53:26 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:19.870 16:53:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.870 ************************************ 00:29:19.870 START TEST nvmf_host 00:29:19.870 ************************************ 00:29:19.870 16:53:26 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:19.870 * Looking for test storage... 00:29:19.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:19.870 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:19.870 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:29:19.870 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:20.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.133 --rc genhtml_branch_coverage=1 00:29:20.133 --rc genhtml_function_coverage=1 00:29:20.133 --rc genhtml_legend=1 00:29:20.133 --rc geninfo_all_blocks=1 00:29:20.133 --rc geninfo_unexecuted_blocks=1 00:29:20.133 00:29:20.133 ' 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:20.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.133 --rc genhtml_branch_coverage=1 00:29:20.133 --rc genhtml_function_coverage=1 00:29:20.133 --rc genhtml_legend=1 00:29:20.133 --rc 
geninfo_all_blocks=1 00:29:20.133 --rc geninfo_unexecuted_blocks=1 00:29:20.133 00:29:20.133 ' 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:20.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.133 --rc genhtml_branch_coverage=1 00:29:20.133 --rc genhtml_function_coverage=1 00:29:20.133 --rc genhtml_legend=1 00:29:20.133 --rc geninfo_all_blocks=1 00:29:20.133 --rc geninfo_unexecuted_blocks=1 00:29:20.133 00:29:20.133 ' 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:20.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.133 --rc genhtml_branch_coverage=1 00:29:20.133 --rc genhtml_function_coverage=1 00:29:20.133 --rc genhtml_legend=1 00:29:20.133 --rc geninfo_all_blocks=1 00:29:20.133 --rc geninfo_unexecuted_blocks=1 00:29:20.133 00:29:20.133 ' 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.133 16:53:26 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:20.134 16:53:26 nvmf_tcp.nvmf_host -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:20.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.134 ************************************ 00:29:20.134 START TEST nvmf_multicontroller 00:29:20.134 ************************************ 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:20.134 * Looking for test storage... 00:29:20.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:29:20.134 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 
-- # case "$op" in 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.396 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:29:20.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.397 --rc genhtml_branch_coverage=1 00:29:20.397 --rc genhtml_function_coverage=1 00:29:20.397 --rc genhtml_legend=1 00:29:20.397 --rc geninfo_all_blocks=1 00:29:20.397 --rc geninfo_unexecuted_blocks=1 00:29:20.397 00:29:20.397 ' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:20.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.397 --rc genhtml_branch_coverage=1 00:29:20.397 --rc genhtml_function_coverage=1 00:29:20.397 --rc genhtml_legend=1 00:29:20.397 --rc geninfo_all_blocks=1 00:29:20.397 --rc geninfo_unexecuted_blocks=1 00:29:20.397 00:29:20.397 ' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:20.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.397 --rc genhtml_branch_coverage=1 00:29:20.397 --rc genhtml_function_coverage=1 00:29:20.397 --rc genhtml_legend=1 00:29:20.397 --rc geninfo_all_blocks=1 00:29:20.397 --rc geninfo_unexecuted_blocks=1 00:29:20.397 00:29:20.397 ' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:20.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.397 --rc genhtml_branch_coverage=1 00:29:20.397 --rc genhtml_function_coverage=1 00:29:20.397 --rc genhtml_legend=1 00:29:20.397 --rc geninfo_all_blocks=1 00:29:20.397 --rc geninfo_unexecuted_blocks=1 00:29:20.397 00:29:20.397 ' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.397 16:53:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.397 16:53:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 
00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:20.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:20.397 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:20.398 16:53:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # xtrace_disable 00:29:20.398 16:53:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # pci_devs=() 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # 
pci_drivers=() 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # net_devs=() 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # e810=() 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # local -ga e810 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # x722=() 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # local -ga x722 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # mlx=() 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # local -ga mlx 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 
00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:28.558 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.558 16:53:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:28.558 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:28.558 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:28.559 16:53:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:28.559 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:28.559 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # is_hw=yes 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@247 -- # create_target_ns 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:28.559 16:53:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 -- # local -g _dev 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 
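The pair setup that follows carries addresses as 32-bit integers from an `ip_pool` (167772161 is 0x0A000001) and converts them to dotted-quad form with a `val_to_ip` helper. A reconstruction of that conversion, inferred from the `printf '%u.%u.%u.%u\n' 10 0 0 1` call visible in the trace (the octet-splitting shifts are an assumption about the helper's internals):

```shell
# Split a 32-bit integer into four octets, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as an integer lets the loop at setup.sh@33 hand out consecutive initiator/target addresses with plain arithmetic (`ip_pool += 2` per pair).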
00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:28.559 10.0.0.1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local 
dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:28.559 10.0.0.2 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:28.559 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip 
link set cvl_0_0 up 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:28.560 16:53:34 
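The `ipts` call above (expanded at common.sh@547) tags every rule it inserts with an `SPDK_NVMF:` comment so teardown can later find and delete exactly the rules the test added. A sketch of that wrapper as a dry run (`echo` instead of executing, since the real command needs root; the wrapper name matches the trace but its body here is reconstructed):

```shell
# Dry-run version of the ipts wrapper: append a comment that records the
# original arguments, so cleanup can match rules by the SPDK_NVMF: tag.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Matching on the comment (via `iptables -m comment`) is more robust than re-deriving each rule's arguments at cleanup time.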
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:28.560 16:53:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:28.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.665 ms 00:29:28.560 00:29:28.560 --- 10.0.0.1 ping statistics --- 00:29:28.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.560 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- 
# local dev=target0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:28.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:28.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:29:28.560 00:29:28.560 --- 10.0.0.2 ping statistics --- 00:29:28.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.560 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # return 0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:28.560 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # return 1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev= 
00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@160 -- # return 0 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:28.561 
16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # return 1 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev= 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@160 -- # return 0 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 
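Throughout the address discovery above, each interface's IP is recovered by reading `/sys/class/net/<dev>/ifalias`, where `set_ip` stored it earlier with `tee`. A sketch of that read-back; the temp file stands in for the sysfs node, and the function name is illustrative rather than the script's own:

```shell
# Read an IP previously stashed in an ifalias-style file; print it only if
# non-empty, like setup.sh@163-166 does.
get_ip_from_alias() {
  local alias_file=$1 ip
  ip=$(cat "$alias_file")
  [[ -n $ip ]] && echo "$ip"
}

# Scratch file standing in for /sys/class/net/cvl_0_0/ifalias:
tmp=$(mktemp)
echo 10.0.0.1 > "$tmp"
get_ip_from_alias "$tmp"   # 10.0.0.1
rm -f "$tmp"
```

Using `ifalias` as a side channel means the scripts can map a logical name (initiator0, target0) to its address without parsing `ip addr` output, including inside the `nvmf_ns_spdk` namespace via `ip netns exec`.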
00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:28.561 ' 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=3257257 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 3257257 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3257257 ']' 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:28.561 16:53:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.561 [2024-11-05 16:53:34.968175] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:29:28.561 [2024-11-05 16:53:34.968247] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.561 [2024-11-05 16:53:35.067093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:28.561 [2024-11-05 16:53:35.119346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.561 [2024-11-05 16:53:35.119397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.561 [2024-11-05 16:53:35.119406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.561 [2024-11-05 16:53:35.119413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.561 [2024-11-05 16:53:35.119419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.561 [2024-11-05 16:53:35.121503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.561 [2024-11-05 16:53:35.121674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.561 [2024-11-05 16:53:35.121674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.823 [2024-11-05 16:53:35.812991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
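The target above is launched with `-m 0xE`, and the reactor notices confirm cores 1, 2, and 3 came up (0xE is binary 1110). A small helper, purely illustrative, that decodes which cores a hex mask selects:

```shell
# List the set bit positions of a core mask, lowest core first.
mask_to_cores() {
  local mask=$(( $1 )) core=0
  local -a out=()
  while (( mask )); do
    (( mask & 1 )) && out+=("$core")
    (( mask >>= 1, core++ ))
  done
  echo "${out[*]}"
}

mask_to_cores 0xE    # 1 2 3
mask_to_cores 0x1    # 0
```

This matches the trace: three reactors started, none on core 0, which is left free for the bdevperf initiator and housekeeping.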
00:29:28.823 Malloc0 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.823 [2024-11-05 16:53:35.876730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:28.823 
16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.823 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:29.085 [2024-11-05 16:53:35.888667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:29.085 Malloc1 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:29.085 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.086 16:53:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3257567 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3257567 /var/tmp/bdevperf.sock 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3257567 ']' 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:29.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:29.086 16:53:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.031 NVMe0n1 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.031 16:53:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.031 1 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.031 request: 00:29:30.031 { 00:29:30.031 "name": "NVMe0", 00:29:30.031 "trtype": "tcp", 00:29:30.031 "traddr": "10.0.0.2", 00:29:30.031 "adrfam": "ipv4", 00:29:30.031 "trsvcid": "4420", 00:29:30.031 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:30.031 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:30.031 "hostaddr": "10.0.0.1", 00:29:30.031 "prchk_reftag": false, 00:29:30.031 "prchk_guard": false, 00:29:30.031 "hdgst": false, 00:29:30.031 "ddgst": false, 00:29:30.031 "allow_unrecognized_csi": false, 00:29:30.031 "method": "bdev_nvme_attach_controller", 00:29:30.031 "req_id": 1 00:29:30.031 } 00:29:30.031 Got JSON-RPC error response 00:29:30.031 response: 00:29:30.031 { 00:29:30.031 "code": -114, 00:29:30.031 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:30.031 } 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.031 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.031 request: 00:29:30.031 { 00:29:30.031 "name": "NVMe0", 00:29:30.031 "trtype": "tcp", 00:29:30.031 "traddr": "10.0.0.2", 00:29:30.031 "adrfam": "ipv4", 00:29:30.031 "trsvcid": "4420", 00:29:30.031 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:30.031 "hostaddr": "10.0.0.1", 00:29:30.031 "prchk_reftag": false, 00:29:30.031 "prchk_guard": false, 00:29:30.031 "hdgst": false, 00:29:30.031 "ddgst": false, 00:29:30.031 "allow_unrecognized_csi": false, 00:29:30.031 "method": "bdev_nvme_attach_controller", 00:29:30.031 "req_id": 1 00:29:30.031 } 00:29:30.031 Got JSON-RPC error response 00:29:30.031 response: 00:29:30.031 { 00:29:30.031 "code": -114, 00:29:30.031 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:30.032 } 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.032 16:53:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.032 request: 00:29:30.032 { 00:29:30.032 "name": "NVMe0", 00:29:30.032 "trtype": "tcp", 00:29:30.032 "traddr": "10.0.0.2", 00:29:30.032 "adrfam": "ipv4", 00:29:30.032 "trsvcid": "4420", 00:29:30.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.032 
"hostaddr": "10.0.0.1", 00:29:30.032 "prchk_reftag": false, 00:29:30.032 "prchk_guard": false, 00:29:30.032 "hdgst": false, 00:29:30.032 "ddgst": false, 00:29:30.032 "multipath": "disable", 00:29:30.032 "allow_unrecognized_csi": false, 00:29:30.032 "method": "bdev_nvme_attach_controller", 00:29:30.032 "req_id": 1 00:29:30.032 } 00:29:30.032 Got JSON-RPC error response 00:29:30.032 response: 00:29:30.032 { 00:29:30.032 "code": -114, 00:29:30.032 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:30.032 } 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.032 request: 00:29:30.032 { 00:29:30.032 "name": "NVMe0", 00:29:30.032 "trtype": "tcp", 00:29:30.032 "traddr": "10.0.0.2", 00:29:30.032 "adrfam": "ipv4", 00:29:30.032 "trsvcid": "4420", 00:29:30.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.032 "hostaddr": "10.0.0.1", 00:29:30.032 "prchk_reftag": false, 00:29:30.032 "prchk_guard": false, 00:29:30.032 "hdgst": false, 00:29:30.032 "ddgst": false, 00:29:30.032 "multipath": "failover", 00:29:30.032 "allow_unrecognized_csi": false, 00:29:30.032 "method": "bdev_nvme_attach_controller", 00:29:30.032 "req_id": 1 00:29:30.032 } 00:29:30.032 Got JSON-RPC error response 00:29:30.032 response: 00:29:30.032 { 00:29:30.032 "code": -114, 00:29:30.032 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:30.032 } 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:30.032 
16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.032 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.294 NVMe0n1 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.294 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:30.294 16:53:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:31.683 { 00:29:31.683 "results": [ 00:29:31.683 { 00:29:31.683 "job": "NVMe0n1", 00:29:31.683 "core_mask": "0x1", 00:29:31.683 "workload": "write", 00:29:31.683 "status": "finished", 00:29:31.683 "queue_depth": 128, 00:29:31.683 "io_size": 4096, 00:29:31.683 "runtime": 1.00825, 00:29:31.683 "iops": 20005.950905033475, 00:29:31.683 "mibps": 78.14824572278701, 00:29:31.683 "io_failed": 0, 00:29:31.683 "io_timeout": 0, 00:29:31.683 "avg_latency_us": 6381.70242823856, 00:29:31.683 "min_latency_us": 3904.8533333333335, 00:29:31.683 "max_latency_us": 13325.653333333334 00:29:31.683 } 00:29:31.683 ], 00:29:31.683 "core_count": 1 00:29:31.683 } 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.683 16:53:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3257567 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3257567 ']' 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3257567 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3257567 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3257567' 00:29:31.683 killing process with pid 3257567 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3257567 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3257567 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.683 16:53:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:29:31.683 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:31.683 [2024-11-05 16:53:36.008627] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:29:31.683 [2024-11-05 16:53:36.008686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257567 ] 00:29:31.683 [2024-11-05 16:53:36.079565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.683 [2024-11-05 16:53:36.115814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.683 [2024-11-05 16:53:37.306501] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 48ecfd4a-b3fb-4818-b0be-95a6f60a70fb already exists 00:29:31.683 [2024-11-05 16:53:37.306531] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:48ecfd4a-b3fb-4818-b0be-95a6f60a70fb alias for bdev NVMe1n1 00:29:31.683 [2024-11-05 16:53:37.306541] bdev_nvme.c:4656:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:31.683 Running I/O for 1 seconds... 00:29:31.683 19978.00 IOPS, 78.04 MiB/s 00:29:31.683 Latency(us) 00:29:31.683 [2024-11-05T15:53:38.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.683 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:31.683 NVMe0n1 : 1.01 20005.95 78.15 0.00 0.00 6381.70 3904.85 13325.65 00:29:31.683 [2024-11-05T15:53:38.746Z] =================================================================================================================== 00:29:31.683 [2024-11-05T15:53:38.746Z] Total : 20005.95 78.15 0.00 0.00 6381.70 3904.85 13325.65 00:29:31.683 Received shutdown signal, test time was about 1.000000 seconds 00:29:31.683 00:29:31.683 Latency(us) 00:29:31.683 [2024-11-05T15:53:38.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.683 [2024-11-05T15:53:38.746Z] =================================================================================================================== 00:29:31.683 [2024-11-05T15:53:38.746Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:31.683 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:31.683 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:31.683 rmmod nvme_tcp 00:29:31.683 rmmod nvme_fabrics 00:29:31.945 rmmod nvme_keyring 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 3257257 ']' 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 3257257 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3257257 ']' 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3257257 
00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3257257 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3257257' 00:29:31.945 killing process with pid 3257257 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3257257 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3257257 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@254 -- # local dev 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:31.945 16:53:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # return 0 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:34.494 16:53:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@274 -- # iptr 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-save 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-restore 00:29:34.494 00:29:34.494 real 0m14.028s 00:29:34.494 user 0m16.930s 00:29:34.494 sys 0m6.470s 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.494 ************************************ 00:29:34.494 END TEST nvmf_multicontroller 00:29:34.494 ************************************ 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.494 ************************************ 00:29:34.494 START TEST nvmf_aer 00:29:34.494 ************************************ 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:34.494 * Looking for test storage... 
00:29:34.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.494 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.494 --rc genhtml_branch_coverage=1 00:29:34.494 --rc genhtml_function_coverage=1 00:29:34.494 --rc genhtml_legend=1 00:29:34.494 --rc geninfo_all_blocks=1 00:29:34.495 --rc geninfo_unexecuted_blocks=1 00:29:34.495 00:29:34.495 ' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:34.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.495 --rc 
genhtml_branch_coverage=1 00:29:34.495 --rc genhtml_function_coverage=1 00:29:34.495 --rc genhtml_legend=1 00:29:34.495 --rc geninfo_all_blocks=1 00:29:34.495 --rc geninfo_unexecuted_blocks=1 00:29:34.495 00:29:34.495 ' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:34.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.495 --rc genhtml_branch_coverage=1 00:29:34.495 --rc genhtml_function_coverage=1 00:29:34.495 --rc genhtml_legend=1 00:29:34.495 --rc geninfo_all_blocks=1 00:29:34.495 --rc geninfo_unexecuted_blocks=1 00:29:34.495 00:29:34.495 ' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:34.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.495 --rc genhtml_branch_coverage=1 00:29:34.495 --rc genhtml_function_coverage=1 00:29:34.495 --rc genhtml_legend=1 00:29:34.495 --rc geninfo_all_blocks=1 00:29:34.495 --rc geninfo_unexecuted_blocks=1 00:29:34.495 00:29:34.495 ' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.495 16:53:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
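The `PATH` echoed above shows the same tool directories (`/opt/golangci/...`, `/opt/protoc/...`, `/opt/go/...`) prepended many times, because `paths/export.sh` re-prepends them on every sourcing. A hypothetical deduplication helper (not part of SPDK; a sketch of how such growth could be collapsed) in pure bash:

```shell
# dedup_path: keep only the first occurrence of each directory in a
# colon-separated PATH-like string. Name and helper are illustrative only.
dedup_path() {
    local p=$1 out= seen= dir
    local IFS=:
    for dir in $p; do                 # split on ':' via IFS
        case ":$seen:" in
            *":$dir:"*) ;;            # already kept earlier, skip duplicate
            *) seen=$seen:$dir
               out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}
```

Order is preserved, so the earliest (highest-priority) copy of each directory wins, which is the behavior repeated prepending was trying to guarantee anyway.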
00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:34.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:29:34.495 16:53:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:42.647 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 
00:29:42.647 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:42.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:42.647 16:53:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:42.647 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:42.648 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@247 -- # create_target_ns 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:42.648 16:53:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:42.648 
16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:42.648 16:53:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:42.648 10.0.0.1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:42.648 10.0.0.2 
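The `set_ip` calls above carry addresses as 32-bit integers from the pool (`167772161` is `0x0A000001`) and `val_to_ip` renders them as dotted quads via `printf '%u.%u.%u.%u\n'`. A self-contained sketch of that conversion, assuming the shift-and-mask decomposition (the log shows only the helper's name and its printf output, not its body):

```shell
# Sketch of the val_to_ip conversion exercised in the log: break a 32-bit
# integer into four octets, most significant first, and print a dotted quad.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}
```

Keeping the pool as an integer lets the setup loop hand out consecutive addresses with plain arithmetic (`ips=("$ip" $((++ip)))` in the trace), converting to text only at the `ip addr add` boundary.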
00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # 
dev_map["target$id"]=cvl_0_1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:42.648 16:53:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:42.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.549 ms 00:29:42.648 00:29:42.648 --- 10.0.0.1 ping statistics --- 00:29:42.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.648 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:42.648 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:42.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:42.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:29:42.649 00:29:42.649 --- 10.0.0.2 ping statistics --- 00:29:42.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.649 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 
-- # dev=cvl_0_0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # return 1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev= 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@160 -- # return 0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 
00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # return 1 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev= 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@160 -- # return 0 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:42.649 ' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:42.649 16:53:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=3262272 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 3262272 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3262272 ']' 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:42.649 16:53:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.649 [2024-11-05 16:53:48.880429] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:29:42.649 [2024-11-05 16:53:48.880485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.649 [2024-11-05 16:53:48.956681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.649 [2024-11-05 16:53:48.992911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.649 [2024-11-05 16:53:48.992944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.649 [2024-11-05 16:53:48.992952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.649 [2024-11-05 16:53:48.992959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.649 [2024-11-05 16:53:48.992965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:42.649 [2024-11-05 16:53:48.994466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.649 [2024-11-05 16:53:48.994579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.650 [2024-11-05 16:53:48.994735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.650 [2024-11-05 16:53:48.994736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.650 [2024-11-05 16:53:49.696419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.650 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.911 Malloc0 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.911 [2024-11-05 16:53:49.748941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.911 [ 00:29:42.911 { 00:29:42.911 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:42.911 "subtype": "Discovery", 00:29:42.911 "listen_addresses": 
[], 00:29:42.911 "allow_any_host": true, 00:29:42.911 "hosts": [] 00:29:42.911 }, 00:29:42.911 { 00:29:42.911 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.911 "subtype": "NVMe", 00:29:42.911 "listen_addresses": [ 00:29:42.911 { 00:29:42.911 "trtype": "TCP", 00:29:42.911 "adrfam": "IPv4", 00:29:42.911 "traddr": "10.0.0.2", 00:29:42.911 "trsvcid": "4420" 00:29:42.911 } 00:29:42.911 ], 00:29:42.911 "allow_any_host": true, 00:29:42.911 "hosts": [], 00:29:42.911 "serial_number": "SPDK00000000000001", 00:29:42.911 "model_number": "SPDK bdev Controller", 00:29:42.911 "max_namespaces": 2, 00:29:42.911 "min_cntlid": 1, 00:29:42.911 "max_cntlid": 65519, 00:29:42.911 "namespaces": [ 00:29:42.911 { 00:29:42.911 "nsid": 1, 00:29:42.911 "bdev_name": "Malloc0", 00:29:42.911 "name": "Malloc0", 00:29:42.911 "nguid": "2C5D0D8C9066434C9CE9767DB204E450", 00:29:42.911 "uuid": "2c5d0d8c-9066-434c-9ce9-767db204e450" 00:29:42.911 } 00:29:42.911 ] 00:29:42.911 } 00:29:42.911 ] 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3262601 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:29:42.911 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:29:42.912 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:42.912 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:29:42.912 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:29:42.912 16:53:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.173 Malloc1 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.173 Asynchronous Event Request test 00:29:43.173 Attaching to 10.0.0.2 00:29:43.173 Attached to 10.0.0.2 00:29:43.173 Registering asynchronous event callbacks... 00:29:43.173 Starting namespace attribute notice tests for all controllers... 00:29:43.173 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:43.173 aer_cb - Changed Namespace 00:29:43.173 Cleaning up... 
00:29:43.173 [ 00:29:43.173 { 00:29:43.173 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:43.173 "subtype": "Discovery", 00:29:43.173 "listen_addresses": [], 00:29:43.173 "allow_any_host": true, 00:29:43.173 "hosts": [] 00:29:43.173 }, 00:29:43.173 { 00:29:43.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:43.173 "subtype": "NVMe", 00:29:43.173 "listen_addresses": [ 00:29:43.173 { 00:29:43.173 "trtype": "TCP", 00:29:43.173 "adrfam": "IPv4", 00:29:43.173 "traddr": "10.0.0.2", 00:29:43.173 "trsvcid": "4420" 00:29:43.173 } 00:29:43.173 ], 00:29:43.173 "allow_any_host": true, 00:29:43.173 "hosts": [], 00:29:43.173 "serial_number": "SPDK00000000000001", 00:29:43.173 "model_number": "SPDK bdev Controller", 00:29:43.173 "max_namespaces": 2, 00:29:43.173 "min_cntlid": 1, 00:29:43.173 "max_cntlid": 65519, 00:29:43.173 "namespaces": [ 00:29:43.173 { 00:29:43.173 "nsid": 1, 00:29:43.173 "bdev_name": "Malloc0", 00:29:43.173 "name": "Malloc0", 00:29:43.173 "nguid": "2C5D0D8C9066434C9CE9767DB204E450", 00:29:43.173 "uuid": "2c5d0d8c-9066-434c-9ce9-767db204e450" 00:29:43.173 }, 00:29:43.173 { 00:29:43.173 "nsid": 2, 00:29:43.173 "bdev_name": "Malloc1", 00:29:43.173 "name": "Malloc1", 00:29:43.173 "nguid": "FFA030E9F0834257BDAE1EC204F8BA1F", 00:29:43.173 "uuid": "ffa030e9-f083-4257-bdae-1ec204f8ba1f" 00:29:43.173 } 00:29:43.173 ] 00:29:43.173 } 00:29:43.173 ] 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3262601 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.173 16:53:50 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:43.173 rmmod nvme_tcp 00:29:43.173 rmmod nvme_fabrics 00:29:43.173 rmmod nvme_keyring 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 
3262272 ']' 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 3262272 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3262272 ']' 00:29:43.173 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3262272 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3262272 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3262272' 00:29:43.463 killing process with pid 3262272 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3262272 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3262272 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@254 -- # local dev 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:43.463 16:53:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:45.454 16:53:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # return 0 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:45.454 16:53:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@274 -- # iptr 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-save 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-restore 00:29:45.454 00:29:45.454 real 0m11.347s 00:29:45.454 user 0m7.921s 00:29:45.454 sys 0m6.024s 00:29:45.454 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:45.455 16:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:45.455 ************************************ 00:29:45.455 END TEST nvmf_aer 00:29:45.455 ************************************ 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.717 ************************************ 00:29:45.717 START TEST nvmf_async_init 00:29:45.717 ************************************ 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:45.717 * Looking for test storage... 
00:29:45.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.717 16:53:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:45.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.717 --rc genhtml_branch_coverage=1 00:29:45.717 --rc genhtml_function_coverage=1 00:29:45.717 --rc genhtml_legend=1 00:29:45.717 --rc geninfo_all_blocks=1 00:29:45.717 --rc geninfo_unexecuted_blocks=1 00:29:45.717 
00:29:45.717 ' 00:29:45.717 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:45.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.717 --rc genhtml_branch_coverage=1 00:29:45.717 --rc genhtml_function_coverage=1 00:29:45.717 --rc genhtml_legend=1 00:29:45.717 --rc geninfo_all_blocks=1 00:29:45.717 --rc geninfo_unexecuted_blocks=1 00:29:45.717 00:29:45.717 ' 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:45.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.980 --rc genhtml_branch_coverage=1 00:29:45.980 --rc genhtml_function_coverage=1 00:29:45.980 --rc genhtml_legend=1 00:29:45.980 --rc geninfo_all_blocks=1 00:29:45.980 --rc geninfo_unexecuted_blocks=1 00:29:45.980 00:29:45.980 ' 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:45.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.980 --rc genhtml_branch_coverage=1 00:29:45.980 --rc genhtml_function_coverage=1 00:29:45.980 --rc genhtml_legend=1 00:29:45.980 --rc geninfo_all_blocks=1 00:29:45.980 --rc geninfo_unexecuted_blocks=1 00:29:45.980 00:29:45.980 ' 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
paths/export.sh@5 -- # export PATH 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:45.980 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d355c7a3d7ea4a5e9172459d1bf664ac 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:45.980 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:45.981 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:29:45.981 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:45.981 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:29:45.981 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:45.981 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:45.981 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:45.981 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:29:45.981 16:53:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # net_devs=() 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:29:54.126 16:53:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:54.126 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 
-- # [[ e810 == e810 ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:54.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:54.127 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:54.127 
16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:54.127 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:54.127 16:53:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:54.127 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@247 -- # create_target_ns 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 
00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:54.127 16:53:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:54.127 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:54.127 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:54.127 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # 
eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:54.128 10.0.0.1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:54.128 10.0.0.2 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:54.128 
16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:54.128 
16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:54.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.691 ms 00:29:54.128 00:29:54.128 --- 10.0.0.1 ping statistics --- 00:29:54.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.128 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:54.128 16:54:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:54.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:29:54.128 00:29:54.128 --- 10.0.0.2 ping statistics --- 00:29:54.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.128 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:54.128 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:54.129 
16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # return 1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev= 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@160 -- # return 0 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_1 
00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # return 1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 
-- # dev= 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@160 -- # return 0 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:54.129 ' 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=3266985 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 3266985 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@833 -- # '[' -z 3266985 ']' 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:54.129 16:54:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.129 [2024-11-05 16:54:00.541453] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:29:54.129 [2024-11-05 16:54:00.541527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.129 [2024-11-05 16:54:00.623877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.129 [2024-11-05 16:54:00.665427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.129 [2024-11-05 16:54:00.665467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.129 [2024-11-05 16:54:00.665475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.129 [2024-11-05 16:54:00.665482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.129 [2024-11-05 16:54:00.665488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:54.129 [2024-11-05 16:54:00.666095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.390 [2024-11-05 16:54:01.392010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.390 null0 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d355c7a3d7ea4a5e9172459d1bf664ac 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.390 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.390 [2024-11-05 16:54:01.452293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.651 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.651 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:54.651 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.651 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.651 nvme0n1 00:29:54.651 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.651 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:54.651 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.651 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.651 [ 00:29:54.651 { 00:29:54.651 "name": "nvme0n1", 00:29:54.651 "aliases": [ 00:29:54.651 "d355c7a3-d7ea-4a5e-9172-459d1bf664ac" 00:29:54.651 ], 00:29:54.651 "product_name": "NVMe disk", 00:29:54.651 "block_size": 512, 00:29:54.651 "num_blocks": 2097152, 00:29:54.651 "uuid": "d355c7a3-d7ea-4a5e-9172-459d1bf664ac", 00:29:54.651 "numa_id": 0, 00:29:54.651 "assigned_rate_limits": { 00:29:54.651 "rw_ios_per_sec": 0, 00:29:54.651 "rw_mbytes_per_sec": 0, 00:29:54.651 "r_mbytes_per_sec": 0, 00:29:54.651 "w_mbytes_per_sec": 0 00:29:54.651 }, 00:29:54.651 "claimed": false, 00:29:54.651 "zoned": false, 00:29:54.651 "supported_io_types": { 00:29:54.651 "read": true, 00:29:54.651 "write": true, 00:29:54.651 "unmap": false, 00:29:54.651 "flush": true, 00:29:54.651 "reset": true, 00:29:54.651 "nvme_admin": true, 00:29:54.651 "nvme_io": true, 00:29:54.651 "nvme_io_md": false, 00:29:54.651 "write_zeroes": true, 00:29:54.651 "zcopy": false, 00:29:54.651 "get_zone_info": false, 00:29:54.651 "zone_management": false, 00:29:54.651 "zone_append": false, 00:29:54.651 "compare": true, 00:29:54.651 "compare_and_write": true, 00:29:54.651 "abort": true, 00:29:54.651 "seek_hole": false, 00:29:54.651 "seek_data": false, 00:29:54.651 "copy": true, 00:29:54.651 
"nvme_iov_md": false 00:29:54.651 }, 00:29:54.651 "memory_domains": [ 00:29:54.651 { 00:29:54.651 "dma_device_id": "system", 00:29:54.651 "dma_device_type": 1 00:29:54.651 } 00:29:54.651 ], 00:29:54.651 "driver_specific": { 00:29:54.651 "nvme": [ 00:29:54.651 { 00:29:54.651 "trid": { 00:29:54.651 "trtype": "TCP", 00:29:54.651 "adrfam": "IPv4", 00:29:54.651 "traddr": "10.0.0.2", 00:29:54.651 "trsvcid": "4420", 00:29:54.651 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:54.651 }, 00:29:54.651 "ctrlr_data": { 00:29:54.913 "cntlid": 1, 00:29:54.913 "vendor_id": "0x8086", 00:29:54.913 "model_number": "SPDK bdev Controller", 00:29:54.913 "serial_number": "00000000000000000000", 00:29:54.913 "firmware_revision": "25.01", 00:29:54.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:54.913 "oacs": { 00:29:54.913 "security": 0, 00:29:54.913 "format": 0, 00:29:54.913 "firmware": 0, 00:29:54.913 "ns_manage": 0 00:29:54.913 }, 00:29:54.913 "multi_ctrlr": true, 00:29:54.913 "ana_reporting": false 00:29:54.913 }, 00:29:54.913 "vs": { 00:29:54.913 "nvme_version": "1.3" 00:29:54.913 }, 00:29:54.913 "ns_data": { 00:29:54.913 "id": 1, 00:29:54.913 "can_share": true 00:29:54.913 } 00:29:54.913 } 00:29:54.913 ], 00:29:54.913 "mp_policy": "active_passive" 00:29:54.913 } 00:29:54.913 } 00:29:54.913 ] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.913 [2024-11-05 16:54:01.726534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:54.913 [2024-11-05 16:54:01.726597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xdedf60 (9): Bad file descriptor 00:29:54.913 [2024-11-05 16:54:01.859852] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.913 [ 00:29:54.913 { 00:29:54.913 "name": "nvme0n1", 00:29:54.913 "aliases": [ 00:29:54.913 "d355c7a3-d7ea-4a5e-9172-459d1bf664ac" 00:29:54.913 ], 00:29:54.913 "product_name": "NVMe disk", 00:29:54.913 "block_size": 512, 00:29:54.913 "num_blocks": 2097152, 00:29:54.913 "uuid": "d355c7a3-d7ea-4a5e-9172-459d1bf664ac", 00:29:54.913 "numa_id": 0, 00:29:54.913 "assigned_rate_limits": { 00:29:54.913 "rw_ios_per_sec": 0, 00:29:54.913 "rw_mbytes_per_sec": 0, 00:29:54.913 "r_mbytes_per_sec": 0, 00:29:54.913 "w_mbytes_per_sec": 0 00:29:54.913 }, 00:29:54.913 "claimed": false, 00:29:54.913 "zoned": false, 00:29:54.913 "supported_io_types": { 00:29:54.913 "read": true, 00:29:54.913 "write": true, 00:29:54.913 "unmap": false, 00:29:54.913 "flush": true, 00:29:54.913 "reset": true, 00:29:54.913 "nvme_admin": true, 00:29:54.913 "nvme_io": true, 00:29:54.913 "nvme_io_md": false, 00:29:54.913 "write_zeroes": true, 00:29:54.913 "zcopy": false, 00:29:54.913 "get_zone_info": false, 00:29:54.913 "zone_management": false, 00:29:54.913 "zone_append": false, 00:29:54.913 "compare": true, 00:29:54.913 "compare_and_write": true, 00:29:54.913 "abort": true, 00:29:54.913 "seek_hole": false, 00:29:54.913 "seek_data": false, 00:29:54.913 "copy": true, 00:29:54.913 "nvme_iov_md": false 00:29:54.913 }, 00:29:54.913 "memory_domains": [ 
00:29:54.913 { 00:29:54.913 "dma_device_id": "system", 00:29:54.913 "dma_device_type": 1 00:29:54.913 } 00:29:54.913 ], 00:29:54.913 "driver_specific": { 00:29:54.913 "nvme": [ 00:29:54.913 { 00:29:54.913 "trid": { 00:29:54.913 "trtype": "TCP", 00:29:54.913 "adrfam": "IPv4", 00:29:54.913 "traddr": "10.0.0.2", 00:29:54.913 "trsvcid": "4420", 00:29:54.913 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:54.913 }, 00:29:54.913 "ctrlr_data": { 00:29:54.913 "cntlid": 2, 00:29:54.913 "vendor_id": "0x8086", 00:29:54.913 "model_number": "SPDK bdev Controller", 00:29:54.913 "serial_number": "00000000000000000000", 00:29:54.913 "firmware_revision": "25.01", 00:29:54.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:54.913 "oacs": { 00:29:54.913 "security": 0, 00:29:54.913 "format": 0, 00:29:54.913 "firmware": 0, 00:29:54.913 "ns_manage": 0 00:29:54.913 }, 00:29:54.913 "multi_ctrlr": true, 00:29:54.913 "ana_reporting": false 00:29:54.913 }, 00:29:54.913 "vs": { 00:29:54.913 "nvme_version": "1.3" 00:29:54.913 }, 00:29:54.913 "ns_data": { 00:29:54.913 "id": 1, 00:29:54.913 "can_share": true 00:29:54.913 } 00:29:54.913 } 00:29:54.913 ], 00:29:54.913 "mp_policy": "active_passive" 00:29:54.913 } 00:29:54.913 } 00:29:54.913 ] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ogxg9bw3SQ 
00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ogxg9bw3SQ 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ogxg9bw3SQ 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.913 [2024-11-05 16:54:01.951234] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:54.913 [2024-11-05 16:54:01.951363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.913 16:54:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.913 [2024-11-05 16:54:01.975312] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:55.174 nvme0n1 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.174 [ 00:29:55.174 { 00:29:55.174 "name": "nvme0n1", 00:29:55.174 "aliases": [ 00:29:55.174 "d355c7a3-d7ea-4a5e-9172-459d1bf664ac" 00:29:55.174 ], 00:29:55.174 "product_name": "NVMe disk", 00:29:55.174 "block_size": 512, 00:29:55.174 "num_blocks": 2097152, 00:29:55.174 "uuid": "d355c7a3-d7ea-4a5e-9172-459d1bf664ac", 00:29:55.174 "numa_id": 0, 00:29:55.174 "assigned_rate_limits": { 00:29:55.174 "rw_ios_per_sec": 0, 00:29:55.174 
"rw_mbytes_per_sec": 0, 00:29:55.174 "r_mbytes_per_sec": 0, 00:29:55.174 "w_mbytes_per_sec": 0 00:29:55.174 }, 00:29:55.174 "claimed": false, 00:29:55.174 "zoned": false, 00:29:55.174 "supported_io_types": { 00:29:55.174 "read": true, 00:29:55.174 "write": true, 00:29:55.174 "unmap": false, 00:29:55.174 "flush": true, 00:29:55.174 "reset": true, 00:29:55.174 "nvme_admin": true, 00:29:55.174 "nvme_io": true, 00:29:55.174 "nvme_io_md": false, 00:29:55.174 "write_zeroes": true, 00:29:55.174 "zcopy": false, 00:29:55.174 "get_zone_info": false, 00:29:55.174 "zone_management": false, 00:29:55.174 "zone_append": false, 00:29:55.174 "compare": true, 00:29:55.174 "compare_and_write": true, 00:29:55.174 "abort": true, 00:29:55.174 "seek_hole": false, 00:29:55.174 "seek_data": false, 00:29:55.174 "copy": true, 00:29:55.174 "nvme_iov_md": false 00:29:55.174 }, 00:29:55.174 "memory_domains": [ 00:29:55.174 { 00:29:55.174 "dma_device_id": "system", 00:29:55.174 "dma_device_type": 1 00:29:55.174 } 00:29:55.174 ], 00:29:55.174 "driver_specific": { 00:29:55.174 "nvme": [ 00:29:55.174 { 00:29:55.174 "trid": { 00:29:55.174 "trtype": "TCP", 00:29:55.174 "adrfam": "IPv4", 00:29:55.174 "traddr": "10.0.0.2", 00:29:55.174 "trsvcid": "4421", 00:29:55.174 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.174 }, 00:29:55.174 "ctrlr_data": { 00:29:55.174 "cntlid": 3, 00:29:55.174 "vendor_id": "0x8086", 00:29:55.174 "model_number": "SPDK bdev Controller", 00:29:55.174 "serial_number": "00000000000000000000", 00:29:55.174 "firmware_revision": "25.01", 00:29:55.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.174 "oacs": { 00:29:55.174 "security": 0, 00:29:55.174 "format": 0, 00:29:55.174 "firmware": 0, 00:29:55.174 "ns_manage": 0 00:29:55.174 }, 00:29:55.174 "multi_ctrlr": true, 00:29:55.174 "ana_reporting": false 00:29:55.174 }, 00:29:55.174 "vs": { 00:29:55.174 "nvme_version": "1.3" 00:29:55.174 }, 00:29:55.174 "ns_data": { 00:29:55.174 "id": 1, 00:29:55.174 "can_share": true 00:29:55.174 } 
00:29:55.174 } 00:29:55.174 ], 00:29:55.174 "mp_policy": "active_passive" 00:29:55.174 } 00:29:55.174 } 00:29:55.174 ] 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ogxg9bw3SQ 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:55.174 rmmod nvme_tcp 00:29:55.174 rmmod nvme_fabrics 00:29:55.174 rmmod nvme_keyring 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:29:55.174 16:54:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 3266985 ']' 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 3266985 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3266985 ']' 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3266985 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:55.174 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3266985 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3266985' 00:29:55.436 killing process with pid 3266985 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3266985 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3266985 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@254 -- # local dev 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
eval '_remove_target_ns 15> /dev/null' 00:29:55.436 16:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # return 0 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:57.986 16:54:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@274 -- # iptr 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-save 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-restore 00:29:57.986 00:29:57.986 real 0m11.874s 00:29:57.986 user 0m4.279s 00:29:57.986 sys 0m6.136s 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.986 ************************************ 00:29:57.986 END TEST nvmf_async_init 00:29:57.986 ************************************ 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.986 ************************************ 00:29:57.986 START TEST dma 00:29:57.986 ************************************ 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:57.986 * Looking for test storage... 00:29:57.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.986 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:57.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.986 --rc genhtml_branch_coverage=1 00:29:57.986 --rc genhtml_function_coverage=1 00:29:57.986 --rc genhtml_legend=1 00:29:57.986 --rc geninfo_all_blocks=1 00:29:57.987 --rc geninfo_unexecuted_blocks=1 00:29:57.987 00:29:57.987 ' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:57.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.987 --rc genhtml_branch_coverage=1 00:29:57.987 --rc genhtml_function_coverage=1 
00:29:57.987 --rc genhtml_legend=1 00:29:57.987 --rc geninfo_all_blocks=1 00:29:57.987 --rc geninfo_unexecuted_blocks=1 00:29:57.987 00:29:57.987 ' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:57.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.987 --rc genhtml_branch_coverage=1 00:29:57.987 --rc genhtml_function_coverage=1 00:29:57.987 --rc genhtml_legend=1 00:29:57.987 --rc geninfo_all_blocks=1 00:29:57.987 --rc geninfo_unexecuted_blocks=1 00:29:57.987 00:29:57.987 ' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:57.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.987 --rc genhtml_branch_coverage=1 00:29:57.987 --rc genhtml_function_coverage=1 00:29:57.987 --rc genhtml_legend=1 00:29:57.987 --rc geninfo_all_blocks=1 00:29:57.987 --rc geninfo_unexecuted_blocks=1 00:29:57.987 00:29:57.987 ' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@50 -- # : 0 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:57.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:57.987 
16:54:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:57.987 00:29:57.987 real 0m0.235s 00:29:57.987 user 0m0.128s 00:29:57.987 sys 0m0.123s 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:57.987 ************************************ 00:29:57.987 END TEST dma 00:29:57.987 ************************************ 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.987 ************************************ 00:29:57.987 START TEST nvmf_identify 00:29:57.987 ************************************ 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:57.987 * Looking for test storage... 
00:29:57.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:29:57.987 16:54:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:57.987 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:57.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.988 --rc genhtml_branch_coverage=1 00:29:57.988 --rc genhtml_function_coverage=1 00:29:57.988 --rc genhtml_legend=1 00:29:57.988 --rc geninfo_all_blocks=1 00:29:57.988 --rc geninfo_unexecuted_blocks=1 00:29:57.988 00:29:57.988 ' 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:29:57.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.988 --rc genhtml_branch_coverage=1 00:29:57.988 --rc genhtml_function_coverage=1 00:29:57.988 --rc genhtml_legend=1 00:29:57.988 --rc geninfo_all_blocks=1 00:29:57.988 --rc geninfo_unexecuted_blocks=1 00:29:57.988 00:29:57.988 ' 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:57.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.988 --rc genhtml_branch_coverage=1 00:29:57.988 --rc genhtml_function_coverage=1 00:29:57.988 --rc genhtml_legend=1 00:29:57.988 --rc geninfo_all_blocks=1 00:29:57.988 --rc geninfo_unexecuted_blocks=1 00:29:57.988 00:29:57.988 ' 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:57.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.988 --rc genhtml_branch_coverage=1 00:29:57.988 --rc genhtml_function_coverage=1 00:29:57.988 --rc genhtml_legend=1 00:29:57.988 --rc geninfo_all_blocks=1 00:29:57.988 --rc geninfo_unexecuted_blocks=1 00:29:57.988 00:29:57.988 ' 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.988 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- 
# NVMF_TRANSPORT_OPTS= 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:58.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:29:58.251 16:54:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.396 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:06.397 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:06.397 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:06.397 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:06.397 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@247 -- # create_target_ns 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:06.397 16:54:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:06.397 16:54:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:06.397 10.0.0.1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:06.397 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:06.398 10.0.0.2 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:06.398 
16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:06.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.587 ms 00:30:06.398 00:30:06.398 --- 10.0.0.1 ping statistics --- 00:30:06.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.398 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:06.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:30:06.398 00:30:06.398 --- 10.0.0.2 ping statistics --- 00:30:06.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.398 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:06.398 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # return 1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev= 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # return 0 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:06.399 16:54:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # return 1 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev= 00:30:06.399 16:54:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # return 0 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:30:06.399 ' 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3272138 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3272138 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 
-- # '[' -z 3272138 ']' 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:06.399 16:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.399 [2024-11-05 16:54:12.641427] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:30:06.399 [2024-11-05 16:54:12.641494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.399 [2024-11-05 16:54:12.724599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.399 [2024-11-05 16:54:12.767740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.399 [2024-11-05 16:54:12.767781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.399 [2024-11-05 16:54:12.767789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.399 [2024-11-05 16:54:12.767796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.399 [2024-11-05 16:54:12.767802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:06.399 [2024-11-05 16:54:12.769631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.399 [2024-11-05 16:54:12.769779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.399 [2024-11-05 16:54:12.769876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.399 [2024-11-05 16:54:12.769876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.399 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:06.399 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:30:06.399 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.399 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.399 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.399 [2024-11-05 16:54:13.452699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.662 Malloc0 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.662 16:54:13 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.662 [2024-11-05 16:54:13.557945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.662 16:54:13 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.662 [ 00:30:06.662 { 00:30:06.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:06.662 "subtype": "Discovery", 00:30:06.662 "listen_addresses": [ 00:30:06.662 { 00:30:06.662 "trtype": "TCP", 00:30:06.662 "adrfam": "IPv4", 00:30:06.662 "traddr": "10.0.0.2", 00:30:06.662 "trsvcid": "4420" 00:30:06.662 } 00:30:06.662 ], 00:30:06.662 "allow_any_host": true, 00:30:06.662 "hosts": [] 00:30:06.662 }, 00:30:06.662 { 00:30:06.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.662 "subtype": "NVMe", 00:30:06.662 "listen_addresses": [ 00:30:06.662 { 00:30:06.662 "trtype": "TCP", 00:30:06.662 "adrfam": "IPv4", 00:30:06.662 "traddr": "10.0.0.2", 00:30:06.662 "trsvcid": "4420" 00:30:06.662 } 00:30:06.662 ], 00:30:06.662 "allow_any_host": true, 00:30:06.662 "hosts": [], 00:30:06.662 "serial_number": "SPDK00000000000001", 00:30:06.662 "model_number": "SPDK bdev Controller", 00:30:06.662 "max_namespaces": 32, 00:30:06.662 "min_cntlid": 1, 00:30:06.662 "max_cntlid": 65519, 00:30:06.662 "namespaces": [ 00:30:06.662 { 00:30:06.662 "nsid": 1, 00:30:06.662 "bdev_name": "Malloc0", 00:30:06.662 "name": "Malloc0", 00:30:06.662 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:06.662 "eui64": "ABCDEF0123456789", 00:30:06.662 "uuid": "bc9e7752-bae6-43a9-a16e-27e56129d8ca" 00:30:06.662 } 00:30:06.662 ] 00:30:06.662 } 00:30:06.662 ] 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.662 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:06.662 [2024-11-05 16:54:13.611642] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:30:06.662 [2024-11-05 16:54:13.611692] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272333 ] 00:30:06.662 [2024-11-05 16:54:13.665917] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:06.662 [2024-11-05 16:54:13.665966] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:06.662 [2024-11-05 16:54:13.665972] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:06.662 [2024-11-05 16:54:13.665988] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:06.662 [2024-11-05 16:54:13.665997] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:06.662 [2024-11-05 16:54:13.670032] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:06.662 [2024-11-05 16:54:13.670067] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1047690 0 00:30:06.662 [2024-11-05 16:54:13.677756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:06.662 [2024-11-05 16:54:13.677773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:06.662 [2024-11-05 16:54:13.677778] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:06.662 [2024-11-05 16:54:13.677782] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:06.662 [2024-11-05 16:54:13.677816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.662 [2024-11-05 16:54:13.677822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.662 [2024-11-05 16:54:13.677826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.662 [2024-11-05 16:54:13.677841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:06.662 [2024-11-05 16:54:13.677860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.662 [2024-11-05 16:54:13.685755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.662 [2024-11-05 16:54:13.685765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.662 [2024-11-05 16:54:13.685768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.662 [2024-11-05 16:54:13.685773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.662 [2024-11-05 16:54:13.685786] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:06.663 [2024-11-05 16:54:13.685794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:06.663 [2024-11-05 16:54:13.685800] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:06.663 [2024-11-05 16:54:13.685815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.685819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.685823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 
00:30:06.663 [2024-11-05 16:54:13.685831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.663 [2024-11-05 16:54:13.685844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.663 [2024-11-05 16:54:13.686030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.663 [2024-11-05 16:54:13.686036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.663 [2024-11-05 16:54:13.686040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.663 [2024-11-05 16:54:13.686050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:06.663 [2024-11-05 16:54:13.686057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:06.663 [2024-11-05 16:54:13.686064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.663 [2024-11-05 16:54:13.686078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.663 [2024-11-05 16:54:13.686089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.663 [2024-11-05 16:54:13.686281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.663 [2024-11-05 16:54:13.686288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:06.663 [2024-11-05 16:54:13.686291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.663 [2024-11-05 16:54:13.686301] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:06.663 [2024-11-05 16:54:13.686314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:06.663 [2024-11-05 16:54:13.686321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.663 [2024-11-05 16:54:13.686335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.663 [2024-11-05 16:54:13.686346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.663 [2024-11-05 16:54:13.686502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.663 [2024-11-05 16:54:13.686509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.663 [2024-11-05 16:54:13.686513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.663 [2024-11-05 16:54:13.686522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:06.663 [2024-11-05 16:54:13.686531] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.663 [2024-11-05 16:54:13.686546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.663 [2024-11-05 16:54:13.686556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.663 [2024-11-05 16:54:13.686718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.663 [2024-11-05 16:54:13.686724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.663 [2024-11-05 16:54:13.686728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.663 [2024-11-05 16:54:13.686737] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:06.663 [2024-11-05 16:54:13.686742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:06.663 [2024-11-05 16:54:13.686754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:06.663 [2024-11-05 16:54:13.686863] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:06.663 [2024-11-05 16:54:13.686868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:30:06.663 [2024-11-05 16:54:13.686876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.686885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.663 [2024-11-05 16:54:13.686891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.663 [2024-11-05 16:54:13.686902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.663 [2024-11-05 16:54:13.687062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.663 [2024-11-05 16:54:13.687069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.663 [2024-11-05 16:54:13.687074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.687078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.663 [2024-11-05 16:54:13.687084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:06.663 [2024-11-05 16:54:13.687093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.687097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.687100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.663 [2024-11-05 16:54:13.687107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.663 [2024-11-05 16:54:13.687117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.663 [2024-11-05 
16:54:13.687277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.663 [2024-11-05 16:54:13.687283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.663 [2024-11-05 16:54:13.687287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.663 [2024-11-05 16:54:13.687291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.663 [2024-11-05 16:54:13.687295] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:06.663 [2024-11-05 16:54:13.687300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:06.663 [2024-11-05 16:54:13.687307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:06.664 [2024-11-05 16:54:13.687315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:06.664 [2024-11-05 16:54:13.687324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.664 [2024-11-05 16:54:13.687334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-11-05 16:54:13.687345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.664 [2024-11-05 16:54:13.687524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.664 [2024-11-05 16:54:13.687530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:30:06.664 [2024-11-05 16:54:13.687534] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687538] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1047690): datao=0, datal=4096, cccid=0 00:30:06.664 [2024-11-05 16:54:13.687543] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a9100) on tqpair(0x1047690): expected_datao=0, payload_size=4096 00:30:06.664 [2024-11-05 16:54:13.687548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687598] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687603] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.664 [2024-11-05 16:54:13.687734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.664 [2024-11-05 16:54:13.687737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.664 [2024-11-05 16:54:13.687756] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:06.664 [2024-11-05 16:54:13.687764] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:06.664 [2024-11-05 16:54:13.687769] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:06.664 [2024-11-05 16:54:13.687774] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:06.664 [2024-11-05 16:54:13.687781] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:06.664 [2024-11-05 16:54:13.687786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:06.664 [2024-11-05 16:54:13.687795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:06.664 [2024-11-05 16:54:13.687802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.664 [2024-11-05 16:54:13.687816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.664 [2024-11-05 16:54:13.687828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.664 [2024-11-05 16:54:13.687985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.664 [2024-11-05 16:54:13.687991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.664 [2024-11-05 16:54:13.687995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.687999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.664 [2024-11-05 16:54:13.688009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1047690) 00:30:06.664 [2024-11-05 16:54:13.688023] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.664 [2024-11-05 16:54:13.688029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1047690) 00:30:06.664 [2024-11-05 16:54:13.688042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.664 [2024-11-05 16:54:13.688048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1047690) 00:30:06.664 [2024-11-05 16:54:13.688061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.664 [2024-11-05 16:54:13.688067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.664 [2024-11-05 16:54:13.688080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.664 [2024-11-05 16:54:13.688084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:06.664 [2024-11-05 16:54:13.688092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:06.664 [2024-11-05 16:54:13.688100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1047690) 00:30:06.664 [2024-11-05 16:54:13.688111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-11-05 16:54:13.688123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9100, cid 0, qid 0 00:30:06.664 [2024-11-05 16:54:13.688128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9280, cid 1, qid 0 00:30:06.664 [2024-11-05 16:54:13.688133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9400, cid 2, qid 0 00:30:06.664 [2024-11-05 16:54:13.688138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.664 [2024-11-05 16:54:13.688143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9700, cid 4, qid 0 00:30:06.664 [2024-11-05 16:54:13.688358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.664 [2024-11-05 16:54:13.688365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.664 [2024-11-05 16:54:13.688368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9700) on tqpair=0x1047690 00:30:06.664 [2024-11-05 16:54:13.688380] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:06.664 [2024-11-05 16:54:13.688385] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:30:06.664 [2024-11-05 16:54:13.688395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1047690) 00:30:06.664 [2024-11-05 16:54:13.688406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-11-05 16:54:13.688416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9700, cid 4, qid 0 00:30:06.664 [2024-11-05 16:54:13.688625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.664 [2024-11-05 16:54:13.688631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.664 [2024-11-05 16:54:13.688634] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.664 [2024-11-05 16:54:13.688638] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1047690): datao=0, datal=4096, cccid=4 00:30:06.664 [2024-11-05 16:54:13.688642] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a9700) on tqpair(0x1047690): expected_datao=0, payload_size=4096 00:30:06.664 [2024-11-05 16:54:13.688647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.665 [2024-11-05 16:54:13.688666] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.665 [2024-11-05 16:54:13.688669] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.733754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.929 [2024-11-05 16:54:13.733765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.929 [2024-11-05 16:54:13.733769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.733773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x10a9700) on tqpair=0x1047690 00:30:06.929 [2024-11-05 16:54:13.733786] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:06.929 [2024-11-05 16:54:13.733811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.733815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1047690) 00:30:06.929 [2024-11-05 16:54:13.733823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.929 [2024-11-05 16:54:13.733833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.733837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.733840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1047690) 00:30:06.929 [2024-11-05 16:54:13.733847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.929 [2024-11-05 16:54:13.733862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9700, cid 4, qid 0 00:30:06.929 [2024-11-05 16:54:13.733867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9880, cid 5, qid 0 00:30:06.929 [2024-11-05 16:54:13.734094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.929 [2024-11-05 16:54:13.734100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.929 [2024-11-05 16:54:13.734104] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.734107] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1047690): datao=0, datal=1024, cccid=4 00:30:06.929 [2024-11-05 16:54:13.734112] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a9700) on tqpair(0x1047690): expected_datao=0, payload_size=1024 00:30:06.929 [2024-11-05 16:54:13.734116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.734123] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.734127] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.734133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.929 [2024-11-05 16:54:13.734138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.929 [2024-11-05 16:54:13.734142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.734146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9880) on tqpair=0x1047690 00:30:06.929 [2024-11-05 16:54:13.774909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.929 [2024-11-05 16:54:13.774918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.929 [2024-11-05 16:54:13.774922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.774926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9700) on tqpair=0x1047690 00:30:06.929 [2024-11-05 16:54:13.774937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.774942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1047690) 00:30:06.929 [2024-11-05 16:54:13.774949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.929 [2024-11-05 16:54:13.774964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9700, cid 4, qid 0 00:30:06.929 [2024-11-05 16:54:13.775324] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.929 [2024-11-05 16:54:13.775330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.929 [2024-11-05 16:54:13.775334] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775337] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1047690): datao=0, datal=3072, cccid=4 00:30:06.929 [2024-11-05 16:54:13.775342] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a9700) on tqpair(0x1047690): expected_datao=0, payload_size=3072 00:30:06.929 [2024-11-05 16:54:13.775346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775363] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775367] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.929 [2024-11-05 16:54:13.775505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.929 [2024-11-05 16:54:13.775511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9700) on tqpair=0x1047690 00:30:06.929 [2024-11-05 16:54:13.775524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1047690) 00:30:06.929 [2024-11-05 16:54:13.775534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.929 [2024-11-05 16:54:13.775548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9700, cid 4, qid 0 00:30:06.929 [2024-11-05 
16:54:13.775755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.929 [2024-11-05 16:54:13.775762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.929 [2024-11-05 16:54:13.775765] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775769] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1047690): datao=0, datal=8, cccid=4 00:30:06.929 [2024-11-05 16:54:13.775774] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a9700) on tqpair(0x1047690): expected_datao=0, payload_size=8 00:30:06.929 [2024-11-05 16:54:13.775778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775784] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.775788] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.820753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.929 [2024-11-05 16:54:13.820762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.929 [2024-11-05 16:54:13.820766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.929 [2024-11-05 16:54:13.820770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9700) on tqpair=0x1047690 00:30:06.929 ===================================================== 00:30:06.929 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:06.929 ===================================================== 00:30:06.929 Controller Capabilities/Features 00:30:06.929 ================================ 00:30:06.929 Vendor ID: 0000 00:30:06.929 Subsystem Vendor ID: 0000 00:30:06.929 Serial Number: .................... 00:30:06.929 Model Number: ........................................ 
00:30:06.929 Firmware Version: 25.01 00:30:06.929 Recommended Arb Burst: 0 00:30:06.929 IEEE OUI Identifier: 00 00 00 00:30:06.929 Multi-path I/O 00:30:06.929 May have multiple subsystem ports: No 00:30:06.929 May have multiple controllers: No 00:30:06.929 Associated with SR-IOV VF: No 00:30:06.929 Max Data Transfer Size: 131072 00:30:06.929 Max Number of Namespaces: 0 00:30:06.929 Max Number of I/O Queues: 1024 00:30:06.929 NVMe Specification Version (VS): 1.3 00:30:06.929 NVMe Specification Version (Identify): 1.3 00:30:06.929 Maximum Queue Entries: 128 00:30:06.929 Contiguous Queues Required: Yes 00:30:06.929 Arbitration Mechanisms Supported 00:30:06.929 Weighted Round Robin: Not Supported 00:30:06.929 Vendor Specific: Not Supported 00:30:06.929 Reset Timeout: 15000 ms 00:30:06.929 Doorbell Stride: 4 bytes 00:30:06.929 NVM Subsystem Reset: Not Supported 00:30:06.929 Command Sets Supported 00:30:06.929 NVM Command Set: Supported 00:30:06.929 Boot Partition: Not Supported 00:30:06.929 Memory Page Size Minimum: 4096 bytes 00:30:06.929 Memory Page Size Maximum: 4096 bytes 00:30:06.929 Persistent Memory Region: Not Supported 00:30:06.929 Optional Asynchronous Events Supported 00:30:06.929 Namespace Attribute Notices: Not Supported 00:30:06.929 Firmware Activation Notices: Not Supported 00:30:06.929 ANA Change Notices: Not Supported 00:30:06.929 PLE Aggregate Log Change Notices: Not Supported 00:30:06.929 LBA Status Info Alert Notices: Not Supported 00:30:06.929 EGE Aggregate Log Change Notices: Not Supported 00:30:06.929 Normal NVM Subsystem Shutdown event: Not Supported 00:30:06.930 Zone Descriptor Change Notices: Not Supported 00:30:06.930 Discovery Log Change Notices: Supported 00:30:06.930 Controller Attributes 00:30:06.930 128-bit Host Identifier: Not Supported 00:30:06.930 Non-Operational Permissive Mode: Not Supported 00:30:06.930 NVM Sets: Not Supported 00:30:06.930 Read Recovery Levels: Not Supported 00:30:06.930 Endurance Groups: Not Supported 00:30:06.930 
Predictable Latency Mode: Not Supported 00:30:06.930 Traffic Based Keep ALive: Not Supported 00:30:06.930 Namespace Granularity: Not Supported 00:30:06.930 SQ Associations: Not Supported 00:30:06.930 UUID List: Not Supported 00:30:06.930 Multi-Domain Subsystem: Not Supported 00:30:06.930 Fixed Capacity Management: Not Supported 00:30:06.930 Variable Capacity Management: Not Supported 00:30:06.930 Delete Endurance Group: Not Supported 00:30:06.930 Delete NVM Set: Not Supported 00:30:06.930 Extended LBA Formats Supported: Not Supported 00:30:06.930 Flexible Data Placement Supported: Not Supported 00:30:06.930 00:30:06.930 Controller Memory Buffer Support 00:30:06.930 ================================ 00:30:06.930 Supported: No 00:30:06.930 00:30:06.930 Persistent Memory Region Support 00:30:06.930 ================================ 00:30:06.930 Supported: No 00:30:06.930 00:30:06.930 Admin Command Set Attributes 00:30:06.930 ============================ 00:30:06.930 Security Send/Receive: Not Supported 00:30:06.930 Format NVM: Not Supported 00:30:06.930 Firmware Activate/Download: Not Supported 00:30:06.930 Namespace Management: Not Supported 00:30:06.930 Device Self-Test: Not Supported 00:30:06.930 Directives: Not Supported 00:30:06.930 NVMe-MI: Not Supported 00:30:06.930 Virtualization Management: Not Supported 00:30:06.930 Doorbell Buffer Config: Not Supported 00:30:06.930 Get LBA Status Capability: Not Supported 00:30:06.930 Command & Feature Lockdown Capability: Not Supported 00:30:06.930 Abort Command Limit: 1 00:30:06.930 Async Event Request Limit: 4 00:30:06.930 Number of Firmware Slots: N/A 00:30:06.930 Firmware Slot 1 Read-Only: N/A 00:30:06.930 Firmware Activation Without Reset: N/A 00:30:06.930 Multiple Update Detection Support: N/A 00:30:06.930 Firmware Update Granularity: No Information Provided 00:30:06.930 Per-Namespace SMART Log: No 00:30:06.930 Asymmetric Namespace Access Log Page: Not Supported 00:30:06.930 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:30:06.930 Command Effects Log Page: Not Supported 00:30:06.930 Get Log Page Extended Data: Supported 00:30:06.930 Telemetry Log Pages: Not Supported 00:30:06.930 Persistent Event Log Pages: Not Supported 00:30:06.930 Supported Log Pages Log Page: May Support 00:30:06.930 Commands Supported & Effects Log Page: Not Supported 00:30:06.930 Feature Identifiers & Effects Log Page:May Support 00:30:06.930 NVMe-MI Commands & Effects Log Page: May Support 00:30:06.930 Data Area 4 for Telemetry Log: Not Supported 00:30:06.930 Error Log Page Entries Supported: 128 00:30:06.930 Keep Alive: Not Supported 00:30:06.930 00:30:06.930 NVM Command Set Attributes 00:30:06.930 ========================== 00:30:06.930 Submission Queue Entry Size 00:30:06.930 Max: 1 00:30:06.930 Min: 1 00:30:06.930 Completion Queue Entry Size 00:30:06.930 Max: 1 00:30:06.930 Min: 1 00:30:06.930 Number of Namespaces: 0 00:30:06.930 Compare Command: Not Supported 00:30:06.930 Write Uncorrectable Command: Not Supported 00:30:06.930 Dataset Management Command: Not Supported 00:30:06.930 Write Zeroes Command: Not Supported 00:30:06.930 Set Features Save Field: Not Supported 00:30:06.930 Reservations: Not Supported 00:30:06.930 Timestamp: Not Supported 00:30:06.930 Copy: Not Supported 00:30:06.930 Volatile Write Cache: Not Present 00:30:06.930 Atomic Write Unit (Normal): 1 00:30:06.930 Atomic Write Unit (PFail): 1 00:30:06.930 Atomic Compare & Write Unit: 1 00:30:06.930 Fused Compare & Write: Supported 00:30:06.930 Scatter-Gather List 00:30:06.930 SGL Command Set: Supported 00:30:06.930 SGL Keyed: Supported 00:30:06.930 SGL Bit Bucket Descriptor: Not Supported 00:30:06.930 SGL Metadata Pointer: Not Supported 00:30:06.930 Oversized SGL: Not Supported 00:30:06.930 SGL Metadata Address: Not Supported 00:30:06.930 SGL Offset: Supported 00:30:06.930 Transport SGL Data Block: Not Supported 00:30:06.930 Replay Protected Memory Block: Not Supported 00:30:06.930 00:30:06.930 
Firmware Slot Information 00:30:06.930 ========================= 00:30:06.930 Active slot: 0 00:30:06.930 00:30:06.930 00:30:06.930 Error Log 00:30:06.930 ========= 00:30:06.930 00:30:06.930 Active Namespaces 00:30:06.930 ================= 00:30:06.930 Discovery Log Page 00:30:06.930 ================== 00:30:06.930 Generation Counter: 2 00:30:06.930 Number of Records: 2 00:30:06.930 Record Format: 0 00:30:06.930 00:30:06.930 Discovery Log Entry 0 00:30:06.930 ---------------------- 00:30:06.930 Transport Type: 3 (TCP) 00:30:06.930 Address Family: 1 (IPv4) 00:30:06.930 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:06.930 Entry Flags: 00:30:06.930 Duplicate Returned Information: 1 00:30:06.930 Explicit Persistent Connection Support for Discovery: 1 00:30:06.930 Transport Requirements: 00:30:06.930 Secure Channel: Not Required 00:30:06.930 Port ID: 0 (0x0000) 00:30:06.930 Controller ID: 65535 (0xffff) 00:30:06.930 Admin Max SQ Size: 128 00:30:06.930 Transport Service Identifier: 4420 00:30:06.930 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:06.930 Transport Address: 10.0.0.2 00:30:06.930 Discovery Log Entry 1 00:30:06.930 ---------------------- 00:30:06.930 Transport Type: 3 (TCP) 00:30:06.930 Address Family: 1 (IPv4) 00:30:06.930 Subsystem Type: 2 (NVM Subsystem) 00:30:06.930 Entry Flags: 00:30:06.930 Duplicate Returned Information: 0 00:30:06.930 Explicit Persistent Connection Support for Discovery: 0 00:30:06.930 Transport Requirements: 00:30:06.930 Secure Channel: Not Required 00:30:06.930 Port ID: 0 (0x0000) 00:30:06.930 Controller ID: 65535 (0xffff) 00:30:06.930 Admin Max SQ Size: 128 00:30:06.930 Transport Service Identifier: 4420 00:30:06.930 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:06.930 Transport Address: 10.0.0.2 [2024-11-05 16:54:13.820860] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:06.930 [2024-11-05 
16:54:13.820871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9100) on tqpair=0x1047690 00:30:06.930 [2024-11-05 16:54:13.820878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.930 [2024-11-05 16:54:13.820883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9280) on tqpair=0x1047690 00:30:06.930 [2024-11-05 16:54:13.820888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.930 [2024-11-05 16:54:13.820893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9400) on tqpair=0x1047690 00:30:06.930 [2024-11-05 16:54:13.820898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.930 [2024-11-05 16:54:13.820903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.930 [2024-11-05 16:54:13.820908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.930 [2024-11-05 16:54:13.820917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.930 [2024-11-05 16:54:13.820921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.930 [2024-11-05 16:54:13.820925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.930 [2024-11-05 16:54:13.820933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.930 [2024-11-05 16:54:13.820947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.930 [2024-11-05 16:54:13.821166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 
16:54:13.821173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.821178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.821192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.821206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.821220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.821400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.821406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.821410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.821419] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:06.931 [2024-11-05 16:54:13.821423] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:06.931 [2024-11-05 16:54:13.821432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 
[2024-11-05 16:54:13.821440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.821447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.821457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.821615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.821621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.821625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.821639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.821653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.821663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.821886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.821893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.821896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on 
tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.821910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.821918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.821925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.821935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.822138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.822145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.822148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.822161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.822176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.822186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.822358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.822365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:30:06.931 [2024-11-05 16:54:13.822368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.822382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.822396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.822406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.822575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.822582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.822585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.822599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.822613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.822623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.822806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.822813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.822817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.822831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.822838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.822845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.822855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.823038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.823046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.823049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.823063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.823077] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.823087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.823263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.823270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.823273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.823287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.823301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.823311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.823491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.823497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.823501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.823514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823518] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.931 [2024-11-05 16:54:13.823528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.931 [2024-11-05 16:54:13.823538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.931 [2024-11-05 16:54:13.823715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.931 [2024-11-05 16:54:13.823722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.931 [2024-11-05 16:54:13.823725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.931 [2024-11-05 16:54:13.823739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.931 [2024-11-05 16:54:13.823743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.823750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.932 [2024-11-05 16:54:13.823757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.932 [2024-11-05 16:54:13.823767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.932 [2024-11-05 16:54:13.823956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.932 [2024-11-05 16:54:13.823962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.932 [2024-11-05 16:54:13.823967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.823971] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.932 [2024-11-05 16:54:13.823981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.823985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.823989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.932 [2024-11-05 16:54:13.823995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.932 [2024-11-05 16:54:13.824005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.932 [2024-11-05 16:54:13.824164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.932 [2024-11-05 16:54:13.824170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.932 [2024-11-05 16:54:13.824174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.932 [2024-11-05 16:54:13.824187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.932 [2024-11-05 16:54:13.824201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.932 [2024-11-05 16:54:13.824211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.932 [2024-11-05 16:54:13.824385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.932 [2024-11-05 
16:54:13.824391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.932 [2024-11-05 16:54:13.824395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.932 [2024-11-05 16:54:13.824408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.932 [2024-11-05 16:54:13.824422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.932 [2024-11-05 16:54:13.824432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.932 [2024-11-05 16:54:13.824601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.932 [2024-11-05 16:54:13.824608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.932 [2024-11-05 16:54:13.824611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.932 [2024-11-05 16:54:13.824625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.824632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1047690) 00:30:06.932 [2024-11-05 16:54:13.824639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.932 [2024-11-05 
16:54:13.824649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a9580, cid 3, qid 0 00:30:06.932 [2024-11-05 16:54:13.828752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.932 [2024-11-05 16:54:13.828760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.932 [2024-11-05 16:54:13.828764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.828770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a9580) on tqpair=0x1047690 00:30:06.932 [2024-11-05 16:54:13.828778] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:30:06.932 00:30:06.932 16:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:06.932 [2024-11-05 16:54:13.873081] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:30:06.932 [2024-11-05 16:54:13.873121] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272335 ] 00:30:06.932 [2024-11-05 16:54:13.926779] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:06.932 [2024-11-05 16:54:13.926828] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:06.932 [2024-11-05 16:54:13.926834] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:06.932 [2024-11-05 16:54:13.926846] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:06.932 [2024-11-05 16:54:13.926854] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:06.932 [2024-11-05 16:54:13.930945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:06.932 [2024-11-05 16:54:13.930975] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1caa690 0 00:30:06.932 [2024-11-05 16:54:13.938811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:06.932 [2024-11-05 16:54:13.938823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:06.932 [2024-11-05 16:54:13.938828] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:06.932 [2024-11-05 16:54:13.938831] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:06.932 [2024-11-05 16:54:13.938858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.938864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.938868] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.932 [2024-11-05 16:54:13.938880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:06.932 [2024-11-05 16:54:13.938897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.932 [2024-11-05 16:54:13.946756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.932 [2024-11-05 16:54:13.946766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.932 [2024-11-05 16:54:13.946770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.946775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.932 [2024-11-05 16:54:13.946787] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:06.932 [2024-11-05 16:54:13.946794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:06.932 [2024-11-05 16:54:13.946801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:06.932 [2024-11-05 16:54:13.946813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.946818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.946822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.932 [2024-11-05 16:54:13.946834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.932 [2024-11-05 16:54:13.946848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.932 [2024-11-05 16:54:13.946953] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.932 [2024-11-05 16:54:13.946960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.932 [2024-11-05 16:54:13.946965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.946970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.932 [2024-11-05 16:54:13.946976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:06.932 [2024-11-05 16:54:13.946983] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:06.932 [2024-11-05 16:54:13.946990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.946994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.946998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.932 [2024-11-05 16:54:13.947005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.932 [2024-11-05 16:54:13.947017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.932 [2024-11-05 16:54:13.947188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.932 [2024-11-05 16:54:13.947195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.932 [2024-11-05 16:54:13.947199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.947203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.932 [2024-11-05 16:54:13.947208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:30:06.932 [2024-11-05 16:54:13.947216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:06.932 [2024-11-05 16:54:13.947223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.947227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.932 [2024-11-05 16:54:13.947230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.932 [2024-11-05 16:54:13.947237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.932 [2024-11-05 16:54:13.947248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.932 [2024-11-05 16:54:13.947455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.933 [2024-11-05 16:54:13.947462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.933 [2024-11-05 16:54:13.947466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.933 [2024-11-05 16:54:13.947475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:06.933 [2024-11-05 16:54:13.947484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.933 [2024-11-05 16:54:13.947499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.933 [2024-11-05 16:54:13.947509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.933 [2024-11-05 16:54:13.947675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.933 [2024-11-05 16:54:13.947682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.933 [2024-11-05 16:54:13.947685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.933 [2024-11-05 16:54:13.947694] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:06.933 [2024-11-05 16:54:13.947699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:06.933 [2024-11-05 16:54:13.947707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:06.933 [2024-11-05 16:54:13.947815] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:06.933 [2024-11-05 16:54:13.947820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:06.933 [2024-11-05 16:54:13.947828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.933 [2024-11-05 16:54:13.947842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.933 [2024-11-05 16:54:13.947853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.933 [2024-11-05 16:54:13.947933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.933 [2024-11-05 16:54:13.947940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.933 [2024-11-05 16:54:13.947943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.933 [2024-11-05 16:54:13.947952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:06.933 [2024-11-05 16:54:13.947961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.947969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.933 [2024-11-05 16:54:13.947976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.933 [2024-11-05 16:54:13.947986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.933 [2024-11-05 16:54:13.948196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.933 [2024-11-05 16:54:13.948203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.933 [2024-11-05 16:54:13.948206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.948210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.933 [2024-11-05 16:54:13.948215] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:06.933 [2024-11-05 16:54:13.948219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:06.933 [2024-11-05 16:54:13.948227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:06.933 [2024-11-05 16:54:13.948234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:06.933 [2024-11-05 16:54:13.948246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.948250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.933 [2024-11-05 16:54:13.948257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.933 [2024-11-05 16:54:13.948268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.933 [2024-11-05 16:54:13.948469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.933 [2024-11-05 16:54:13.948475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.933 [2024-11-05 16:54:13.948479] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.948483] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1caa690): datao=0, datal=4096, cccid=0 00:30:06.933 [2024-11-05 16:54:13.948488] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d0c100) on tqpair(0x1caa690): expected_datao=0, payload_size=4096 00:30:06.933 [2024-11-05 16:54:13.948492] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.948507] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.948511] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.988887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.933 [2024-11-05 16:54:13.988897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.933 [2024-11-05 16:54:13.988901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.988905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.933 [2024-11-05 16:54:13.988913] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:06.933 [2024-11-05 16:54:13.988918] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:06.933 [2024-11-05 16:54:13.988923] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:06.933 [2024-11-05 16:54:13.988927] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:06.933 [2024-11-05 16:54:13.988935] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:06.933 [2024-11-05 16:54:13.988941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:06.933 [2024-11-05 16:54:13.988949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:06.933 [2024-11-05 16:54:13.988956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.988960] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.988964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.933 [2024-11-05 16:54:13.988972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.933 [2024-11-05 16:54:13.988984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c100, cid 0, qid 0 00:30:06.933 [2024-11-05 16:54:13.989146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.933 [2024-11-05 16:54:13.989154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.933 [2024-11-05 16:54:13.989157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.989162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:06.933 [2024-11-05 16:54:13.989171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.989176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.989181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1caa690) 00:30:06.933 [2024-11-05 16:54:13.989188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.933 [2024-11-05 16:54:13.989194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.989199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.933 [2024-11-05 16:54:13.989203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1caa690) 00:30:06.933 [2024-11-05 16:54:13.989209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:30:06.934 [2024-11-05 16:54:13.989215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.934 [2024-11-05 16:54:13.989219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.934 [2024-11-05 16:54:13.989222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1caa690) 00:30:06.934 [2024-11-05 16:54:13.989228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.934 [2024-11-05 16:54:13.989234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.934 [2024-11-05 16:54:13.989240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.934 [2024-11-05 16:54:13.989243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1caa690) 00:30:06.934 [2024-11-05 16:54:13.989249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.934 [2024-11-05 16:54:13.989254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:06.934 [2024-11-05 16:54:13.989263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:06.934 [2024-11-05 16:54:13.989269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.934 [2024-11-05 16:54:13.989273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1caa690) 00:30:06.934 [2024-11-05 16:54:13.989280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.934 [2024-11-05 16:54:13.989294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1d0c100, cid 0, qid 0 00:30:06.934 [2024-11-05 16:54:13.989299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c280, cid 1, qid 0 00:30:06.934 [2024-11-05 16:54:13.989304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c400, cid 2, qid 0 00:30:06.934 [2024-11-05 16:54:13.989309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c580, cid 3, qid 0 00:30:06.934 [2024-11-05 16:54:13.989316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c700, cid 4, qid 0 00:30:06.934 [2024-11-05 16:54:13.989512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.934 [2024-11-05 16:54:13.989519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.934 [2024-11-05 16:54:13.989523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.934 [2024-11-05 16:54:13.989526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c700) on tqpair=0x1caa690 00:30:06.934 [2024-11-05 16:54:13.989534] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:06.934 [2024-11-05 16:54:13.989540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:06.934 [2024-11-05 16:54:13.989549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:06.934 [2024-11-05 16:54:13.989556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:06.934 [2024-11-05 16:54:13.989564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.934 [2024-11-05 16:54:13.989569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.934 [2024-11-05 
16:54:13.989573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1caa690) 00:30:06.934 [2024-11-05 16:54:13.989580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.934 [2024-11-05 16:54:13.989591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c700, cid 4, qid 0 00:30:06.934 [2024-11-05 16:54:13.989737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.198 [2024-11-05 16:54:13.989744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.198 [2024-11-05 16:54:13.993754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.993759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c700) on tqpair=0x1caa690 00:30:07.198 [2024-11-05 16:54:13.993824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.993834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.993842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.993846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1caa690) 00:30:07.198 [2024-11-05 16:54:13.993853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.198 [2024-11-05 16:54:13.993864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c700, cid 4, qid 0 00:30:07.198 [2024-11-05 16:54:13.993949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:07.198 [2024-11-05 16:54:13.993956] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:07.198 [2024-11-05 16:54:13.993960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.993964] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1caa690): datao=0, datal=4096, cccid=4 00:30:07.198 [2024-11-05 16:54:13.993968] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d0c700) on tqpair(0x1caa690): expected_datao=0, payload_size=4096 00:30:07.198 [2024-11-05 16:54:13.993973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.993980] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.993983] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.198 [2024-11-05 16:54:13.994201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.198 [2024-11-05 16:54:13.994205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c700) on tqpair=0x1caa690 00:30:07.198 [2024-11-05 16:54:13.994218] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:07.198 [2024-11-05 16:54:13.994231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.994241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.994248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1caa690) 00:30:07.198 [2024-11-05 16:54:13.994259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.198 [2024-11-05 16:54:13.994272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c700, cid 4, qid 0 00:30:07.198 [2024-11-05 16:54:13.994468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:07.198 [2024-11-05 16:54:13.994476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:07.198 [2024-11-05 16:54:13.994480] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994484] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1caa690): datao=0, datal=4096, cccid=4 00:30:07.198 [2024-11-05 16:54:13.994489] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d0c700) on tqpair(0x1caa690): expected_datao=0, payload_size=4096 00:30:07.198 [2024-11-05 16:54:13.994493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994500] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994503] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.198 [2024-11-05 16:54:13.994673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.198 [2024-11-05 16:54:13.994677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c700) on tqpair=0x1caa690 00:30:07.198 [2024-11-05 16:54:13.994693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:07.198 
[2024-11-05 16:54:13.994703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.994710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1caa690) 00:30:07.198 [2024-11-05 16:54:13.994720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.198 [2024-11-05 16:54:13.994731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c700, cid 4, qid 0 00:30:07.198 [2024-11-05 16:54:13.994933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:07.198 [2024-11-05 16:54:13.994941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:07.198 [2024-11-05 16:54:13.994944] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994948] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1caa690): datao=0, datal=4096, cccid=4 00:30:07.198 [2024-11-05 16:54:13.994952] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d0c700) on tqpair(0x1caa690): expected_datao=0, payload_size=4096 00:30:07.198 [2024-11-05 16:54:13.994956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994963] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.994967] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.995187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.198 [2024-11-05 16:54:13.995193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.198 [2024-11-05 16:54:13.995197] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.995201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c700) on tqpair=0x1caa690 00:30:07.198 [2024-11-05 16:54:13.995208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.995216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.995224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.995230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.995238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.995244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.995249] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:07.198 [2024-11-05 16:54:13.995254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:07.198 [2024-11-05 16:54:13.995259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:07.198 [2024-11-05 16:54:13.995273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.995278] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1caa690) 00:30:07.198 [2024-11-05 16:54:13.995284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.198 [2024-11-05 16:54:13.995291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.995295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.995298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1caa690) 00:30:07.198 [2024-11-05 16:54:13.995304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.198 [2024-11-05 16:54:13.995318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c700, cid 4, qid 0 00:30:07.198 [2024-11-05 16:54:13.995323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c880, cid 5, qid 0 00:30:07.198 [2024-11-05 16:54:13.995503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.198 [2024-11-05 16:54:13.995509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.198 [2024-11-05 16:54:13.995512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.995516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c700) on tqpair=0x1caa690 00:30:07.198 [2024-11-05 16:54:13.995523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.198 [2024-11-05 16:54:13.995529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.198 [2024-11-05 16:54:13.995533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.995536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c880) on tqpair=0x1caa690 00:30:07.198 [2024-11-05 
16:54:13.995545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.198 [2024-11-05 16:54:13.995550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1caa690) 00:30:07.198 [2024-11-05 16:54:13.995556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.198 [2024-11-05 16:54:13.995566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c880, cid 5, qid 0 00:30:07.198 [2024-11-05 16:54:13.995722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.198 [2024-11-05 16:54:13.995729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.199 [2024-11-05 16:54:13.995732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.995736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c880) on tqpair=0x1caa690 00:30:07.199 [2024-11-05 16:54:13.995749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.995753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1caa690) 00:30:07.199 [2024-11-05 16:54:13.995760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.199 [2024-11-05 16:54:13.995772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c880, cid 5, qid 0 00:30:07.199 [2024-11-05 16:54:13.995973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.199 [2024-11-05 16:54:13.995979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.199 [2024-11-05 16:54:13.995983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.995987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1d0c880) on tqpair=0x1caa690 00:30:07.199 [2024-11-05 16:54:13.995995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.995999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1caa690) 00:30:07.199 [2024-11-05 16:54:13.996006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.199 [2024-11-05 16:54:13.996015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c880, cid 5, qid 0 00:30:07.199 [2024-11-05 16:54:13.996066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.199 [2024-11-05 16:54:13.996073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.199 [2024-11-05 16:54:13.996076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c880) on tqpair=0x1caa690 00:30:07.199 [2024-11-05 16:54:13.996093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1caa690) 00:30:07.199 [2024-11-05 16:54:13.996104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.199 [2024-11-05 16:54:13.996112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1caa690) 00:30:07.199 [2024-11-05 16:54:13.996122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:07.199 [2024-11-05 16:54:13.996129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1caa690) 00:30:07.199 [2024-11-05 16:54:13.996139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.199 [2024-11-05 16:54:13.996150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1caa690) 00:30:07.199 [2024-11-05 16:54:13.996160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.199 [2024-11-05 16:54:13.996172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c880, cid 5, qid 0 00:30:07.199 [2024-11-05 16:54:13.996177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c700, cid 4, qid 0 00:30:07.199 [2024-11-05 16:54:13.996182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0ca00, cid 6, qid 0 00:30:07.199 [2024-11-05 16:54:13.996187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0cb80, cid 7, qid 0 00:30:07.199 [2024-11-05 16:54:13.996426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:07.199 [2024-11-05 16:54:13.996433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:07.199 [2024-11-05 16:54:13.996436] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996440] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1caa690): datao=0, datal=8192, cccid=5 00:30:07.199 [2024-11-05 16:54:13.996447] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d0c880) on tqpair(0x1caa690): expected_datao=0, payload_size=8192 00:30:07.199 [2024-11-05 16:54:13.996451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996546] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996550] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:07.199 [2024-11-05 16:54:13.996562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:07.199 [2024-11-05 16:54:13.996565] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996569] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1caa690): datao=0, datal=512, cccid=4 00:30:07.199 [2024-11-05 16:54:13.996573] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d0c700) on tqpair(0x1caa690): expected_datao=0, payload_size=512 00:30:07.199 [2024-11-05 16:54:13.996578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996584] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996588] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:07.199 [2024-11-05 16:54:13.996599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:07.199 [2024-11-05 16:54:13.996602] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1caa690): datao=0, datal=512, cccid=6 00:30:07.199 [2024-11-05 16:54:13.996610] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1d0ca00) on tqpair(0x1caa690): expected_datao=0, payload_size=512 00:30:07.199 [2024-11-05 16:54:13.996615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996621] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996625] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:07.199 [2024-11-05 16:54:13.996636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:07.199 [2024-11-05 16:54:13.996639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996643] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1caa690): datao=0, datal=4096, cccid=7 00:30:07.199 [2024-11-05 16:54:13.996647] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d0cb80) on tqpair(0x1caa690): expected_datao=0, payload_size=4096 00:30:07.199 [2024-11-05 16:54:13.996652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996663] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996667] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.199 [2024-11-05 16:54:13.996683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.199 [2024-11-05 16:54:13.996686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c880) on tqpair=0x1caa690 00:30:07.199 [2024-11-05 16:54:13.996701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.199 [2024-11-05 16:54:13.996707] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.199 [2024-11-05 16:54:13.996711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c700) on tqpair=0x1caa690 00:30:07.199 [2024-11-05 16:54:13.996725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.199 [2024-11-05 16:54:13.996731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.199 [2024-11-05 16:54:13.996735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0ca00) on tqpair=0x1caa690 00:30:07.199 [2024-11-05 16:54:13.996750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.199 [2024-11-05 16:54:13.996757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.199 [2024-11-05 16:54:13.996760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.199 [2024-11-05 16:54:13.996764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0cb80) on tqpair=0x1caa690 00:30:07.199 ===================================================== 00:30:07.199 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:07.199 ===================================================== 00:30:07.199 Controller Capabilities/Features 00:30:07.199 ================================ 00:30:07.199 Vendor ID: 8086 00:30:07.199 Subsystem Vendor ID: 8086 00:30:07.199 Serial Number: SPDK00000000000001 00:30:07.199 Model Number: SPDK bdev Controller 00:30:07.199 Firmware Version: 25.01 00:30:07.199 Recommended Arb Burst: 6 00:30:07.199 IEEE OUI Identifier: e4 d2 5c 00:30:07.199 Multi-path I/O 00:30:07.199 May have multiple subsystem ports: Yes 00:30:07.199 May have multiple controllers: Yes 00:30:07.199 Associated with SR-IOV VF: No 
00:30:07.199 Max Data Transfer Size: 131072 00:30:07.199 Max Number of Namespaces: 32 00:30:07.199 Max Number of I/O Queues: 127 00:30:07.199 NVMe Specification Version (VS): 1.3 00:30:07.199 NVMe Specification Version (Identify): 1.3 00:30:07.199 Maximum Queue Entries: 128 00:30:07.199 Contiguous Queues Required: Yes 00:30:07.199 Arbitration Mechanisms Supported 00:30:07.199 Weighted Round Robin: Not Supported 00:30:07.199 Vendor Specific: Not Supported 00:30:07.199 Reset Timeout: 15000 ms 00:30:07.199 Doorbell Stride: 4 bytes 00:30:07.199 NVM Subsystem Reset: Not Supported 00:30:07.199 Command Sets Supported 00:30:07.199 NVM Command Set: Supported 00:30:07.199 Boot Partition: Not Supported 00:30:07.199 Memory Page Size Minimum: 4096 bytes 00:30:07.199 Memory Page Size Maximum: 4096 bytes 00:30:07.199 Persistent Memory Region: Not Supported 00:30:07.199 Optional Asynchronous Events Supported 00:30:07.199 Namespace Attribute Notices: Supported 00:30:07.199 Firmware Activation Notices: Not Supported 00:30:07.199 ANA Change Notices: Not Supported 00:30:07.199 PLE Aggregate Log Change Notices: Not Supported 00:30:07.199 LBA Status Info Alert Notices: Not Supported 00:30:07.199 EGE Aggregate Log Change Notices: Not Supported 00:30:07.200 Normal NVM Subsystem Shutdown event: Not Supported 00:30:07.200 Zone Descriptor Change Notices: Not Supported 00:30:07.200 Discovery Log Change Notices: Not Supported 00:30:07.200 Controller Attributes 00:30:07.200 128-bit Host Identifier: Supported 00:30:07.200 Non-Operational Permissive Mode: Not Supported 00:30:07.200 NVM Sets: Not Supported 00:30:07.200 Read Recovery Levels: Not Supported 00:30:07.200 Endurance Groups: Not Supported 00:30:07.200 Predictable Latency Mode: Not Supported 00:30:07.200 Traffic Based Keep ALive: Not Supported 00:30:07.200 Namespace Granularity: Not Supported 00:30:07.200 SQ Associations: Not Supported 00:30:07.200 UUID List: Not Supported 00:30:07.200 Multi-Domain Subsystem: Not Supported 00:30:07.200 
Fixed Capacity Management: Not Supported 00:30:07.200 Variable Capacity Management: Not Supported 00:30:07.200 Delete Endurance Group: Not Supported 00:30:07.200 Delete NVM Set: Not Supported 00:30:07.200 Extended LBA Formats Supported: Not Supported 00:30:07.200 Flexible Data Placement Supported: Not Supported 00:30:07.200 00:30:07.200 Controller Memory Buffer Support 00:30:07.200 ================================ 00:30:07.200 Supported: No 00:30:07.200 00:30:07.200 Persistent Memory Region Support 00:30:07.200 ================================ 00:30:07.200 Supported: No 00:30:07.200 00:30:07.200 Admin Command Set Attributes 00:30:07.200 ============================ 00:30:07.200 Security Send/Receive: Not Supported 00:30:07.200 Format NVM: Not Supported 00:30:07.200 Firmware Activate/Download: Not Supported 00:30:07.200 Namespace Management: Not Supported 00:30:07.200 Device Self-Test: Not Supported 00:30:07.200 Directives: Not Supported 00:30:07.200 NVMe-MI: Not Supported 00:30:07.200 Virtualization Management: Not Supported 00:30:07.200 Doorbell Buffer Config: Not Supported 00:30:07.200 Get LBA Status Capability: Not Supported 00:30:07.200 Command & Feature Lockdown Capability: Not Supported 00:30:07.200 Abort Command Limit: 4 00:30:07.200 Async Event Request Limit: 4 00:30:07.200 Number of Firmware Slots: N/A 00:30:07.200 Firmware Slot 1 Read-Only: N/A 00:30:07.200 Firmware Activation Without Reset: N/A 00:30:07.200 Multiple Update Detection Support: N/A 00:30:07.200 Firmware Update Granularity: No Information Provided 00:30:07.200 Per-Namespace SMART Log: No 00:30:07.200 Asymmetric Namespace Access Log Page: Not Supported 00:30:07.200 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:07.200 Command Effects Log Page: Supported 00:30:07.200 Get Log Page Extended Data: Supported 00:30:07.200 Telemetry Log Pages: Not Supported 00:30:07.200 Persistent Event Log Pages: Not Supported 00:30:07.200 Supported Log Pages Log Page: May Support 00:30:07.200 Commands Supported & 
Effects Log Page: Not Supported 00:30:07.200 Feature Identifiers & Effects Log Page:May Support 00:30:07.200 NVMe-MI Commands & Effects Log Page: May Support 00:30:07.200 Data Area 4 for Telemetry Log: Not Supported 00:30:07.200 Error Log Page Entries Supported: 128 00:30:07.200 Keep Alive: Supported 00:30:07.200 Keep Alive Granularity: 10000 ms 00:30:07.200 00:30:07.200 NVM Command Set Attributes 00:30:07.200 ========================== 00:30:07.200 Submission Queue Entry Size 00:30:07.200 Max: 64 00:30:07.200 Min: 64 00:30:07.200 Completion Queue Entry Size 00:30:07.200 Max: 16 00:30:07.200 Min: 16 00:30:07.200 Number of Namespaces: 32 00:30:07.200 Compare Command: Supported 00:30:07.200 Write Uncorrectable Command: Not Supported 00:30:07.200 Dataset Management Command: Supported 00:30:07.200 Write Zeroes Command: Supported 00:30:07.200 Set Features Save Field: Not Supported 00:30:07.200 Reservations: Supported 00:30:07.200 Timestamp: Not Supported 00:30:07.200 Copy: Supported 00:30:07.200 Volatile Write Cache: Present 00:30:07.200 Atomic Write Unit (Normal): 1 00:30:07.200 Atomic Write Unit (PFail): 1 00:30:07.200 Atomic Compare & Write Unit: 1 00:30:07.200 Fused Compare & Write: Supported 00:30:07.200 Scatter-Gather List 00:30:07.200 SGL Command Set: Supported 00:30:07.200 SGL Keyed: Supported 00:30:07.200 SGL Bit Bucket Descriptor: Not Supported 00:30:07.200 SGL Metadata Pointer: Not Supported 00:30:07.200 Oversized SGL: Not Supported 00:30:07.200 SGL Metadata Address: Not Supported 00:30:07.200 SGL Offset: Supported 00:30:07.200 Transport SGL Data Block: Not Supported 00:30:07.200 Replay Protected Memory Block: Not Supported 00:30:07.200 00:30:07.200 Firmware Slot Information 00:30:07.200 ========================= 00:30:07.200 Active slot: 1 00:30:07.200 Slot 1 Firmware Revision: 25.01 00:30:07.200 00:30:07.200 00:30:07.200 Commands Supported and Effects 00:30:07.200 ============================== 00:30:07.200 Admin Commands 00:30:07.200 -------------- 
00:30:07.200 Get Log Page (02h): Supported 00:30:07.200 Identify (06h): Supported 00:30:07.200 Abort (08h): Supported 00:30:07.200 Set Features (09h): Supported 00:30:07.200 Get Features (0Ah): Supported 00:30:07.200 Asynchronous Event Request (0Ch): Supported 00:30:07.200 Keep Alive (18h): Supported 00:30:07.200 I/O Commands 00:30:07.200 ------------ 00:30:07.200 Flush (00h): Supported LBA-Change 00:30:07.200 Write (01h): Supported LBA-Change 00:30:07.200 Read (02h): Supported 00:30:07.200 Compare (05h): Supported 00:30:07.200 Write Zeroes (08h): Supported LBA-Change 00:30:07.200 Dataset Management (09h): Supported LBA-Change 00:30:07.200 Copy (19h): Supported LBA-Change 00:30:07.200 00:30:07.200 Error Log 00:30:07.200 ========= 00:30:07.200 00:30:07.200 Arbitration 00:30:07.200 =========== 00:30:07.200 Arbitration Burst: 1 00:30:07.200 00:30:07.200 Power Management 00:30:07.200 ================ 00:30:07.200 Number of Power States: 1 00:30:07.200 Current Power State: Power State #0 00:30:07.200 Power State #0: 00:30:07.200 Max Power: 0.00 W 00:30:07.200 Non-Operational State: Operational 00:30:07.200 Entry Latency: Not Reported 00:30:07.200 Exit Latency: Not Reported 00:30:07.200 Relative Read Throughput: 0 00:30:07.200 Relative Read Latency: 0 00:30:07.200 Relative Write Throughput: 0 00:30:07.200 Relative Write Latency: 0 00:30:07.200 Idle Power: Not Reported 00:30:07.200 Active Power: Not Reported 00:30:07.200 Non-Operational Permissive Mode: Not Supported 00:30:07.200 00:30:07.200 Health Information 00:30:07.200 ================== 00:30:07.200 Critical Warnings: 00:30:07.200 Available Spare Space: OK 00:30:07.200 Temperature: OK 00:30:07.200 Device Reliability: OK 00:30:07.200 Read Only: No 00:30:07.200 Volatile Memory Backup: OK 00:30:07.200 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:07.200 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:07.200 Available Spare: 0% 00:30:07.200 Available Spare Threshold: 0% 00:30:07.200 Life Percentage 
Used:[2024-11-05 16:54:13.996861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.200 [2024-11-05 16:54:13.996867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1caa690) 00:30:07.200 [2024-11-05 16:54:13.996873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.200 [2024-11-05 16:54:13.996885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0cb80, cid 7, qid 0 00:30:07.200 [2024-11-05 16:54:13.997296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.200 [2024-11-05 16:54:13.997303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.200 [2024-11-05 16:54:13.997307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.200 [2024-11-05 16:54:13.997311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0cb80) on tqpair=0x1caa690 00:30:07.200 [2024-11-05 16:54:13.997342] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:07.200 [2024-11-05 16:54:13.997352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c100) on tqpair=0x1caa690 00:30:07.200 [2024-11-05 16:54:13.997358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.200 [2024-11-05 16:54:13.997363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c280) on tqpair=0x1caa690 00:30:07.200 [2024-11-05 16:54:13.997368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.200 [2024-11-05 16:54:13.997373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c400) on tqpair=0x1caa690 00:30:07.200 [2024-11-05 16:54:13.997378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.200 [2024-11-05 16:54:13.997383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c580) on tqpair=0x1caa690 00:30:07.200 [2024-11-05 16:54:13.997388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.200 [2024-11-05 16:54:13.997396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.200 [2024-11-05 16:54:13.997400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.200 [2024-11-05 16:54:13.997404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1caa690) 00:30:07.200 [2024-11-05 16:54:13.997411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.200 [2024-11-05 16:54:13.997423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c580, cid 3, qid 0 00:30:07.201 [2024-11-05 16:54:13.997617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.201 [2024-11-05 16:54:13.997625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.201 [2024-11-05 16:54:13.997628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.201 [2024-11-05 16:54:13.997632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c580) on tqpair=0x1caa690 00:30:07.201 [2024-11-05 16:54:13.997639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.201 [2024-11-05 16:54:13.997643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.201 [2024-11-05 16:54:13.997646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1caa690) 00:30:07.201 [2024-11-05 16:54:13.997655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.201 [2024-11-05 16:54:13.997668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c580, cid 3, qid 0 00:30:07.201 [2024-11-05 16:54:14.001753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.201 [2024-11-05 16:54:14.001763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.201 [2024-11-05 16:54:14.001766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.201 [2024-11-05 16:54:14.001770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c580) on tqpair=0x1caa690 00:30:07.201 [2024-11-05 16:54:14.001775] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:07.201 [2024-11-05 16:54:14.001780] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:07.201 [2024-11-05 16:54:14.001790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:07.201 [2024-11-05 16:54:14.001794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:07.201 [2024-11-05 16:54:14.001798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1caa690) 00:30:07.201 [2024-11-05 16:54:14.001805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.201 [2024-11-05 16:54:14.001816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d0c580, cid 3, qid 0 00:30:07.201 [2024-11-05 16:54:14.001888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:07.201 [2024-11-05 16:54:14.001895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:07.201 [2024-11-05 16:54:14.001898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:07.201 [2024-11-05 16:54:14.001902] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d0c580) on tqpair=0x1caa690 00:30:07.201 [2024-11-05 16:54:14.001910] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:30:07.201 0% 00:30:07.201 Data Units Read: 0 00:30:07.201 Data Units Written: 0 00:30:07.201 Host Read Commands: 0 00:30:07.201 Host Write Commands: 0 00:30:07.201 Controller Busy Time: 0 minutes 00:30:07.201 Power Cycles: 0 00:30:07.201 Power On Hours: 0 hours 00:30:07.201 Unsafe Shutdowns: 0 00:30:07.201 Unrecoverable Media Errors: 0 00:30:07.201 Lifetime Error Log Entries: 0 00:30:07.201 Warning Temperature Time: 0 minutes 00:30:07.201 Critical Temperature Time: 0 minutes 00:30:07.201 00:30:07.201 Number of Queues 00:30:07.201 ================ 00:30:07.201 Number of I/O Submission Queues: 127 00:30:07.201 Number of I/O Completion Queues: 127 00:30:07.201 00:30:07.201 Active Namespaces 00:30:07.201 ================= 00:30:07.201 Namespace ID:1 00:30:07.201 Error Recovery Timeout: Unlimited 00:30:07.201 Command Set Identifier: NVM (00h) 00:30:07.201 Deallocate: Supported 00:30:07.201 Deallocated/Unwritten Error: Not Supported 00:30:07.201 Deallocated Read Value: Unknown 00:30:07.201 Deallocate in Write Zeroes: Not Supported 00:30:07.201 Deallocated Guard Field: 0xFFFF 00:30:07.201 Flush: Supported 00:30:07.201 Reservation: Supported 00:30:07.201 Namespace Sharing Capabilities: Multiple Controllers 00:30:07.201 Size (in LBAs): 131072 (0GiB) 00:30:07.201 Capacity (in LBAs): 131072 (0GiB) 00:30:07.201 Utilization (in LBAs): 131072 (0GiB) 00:30:07.201 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:07.201 EUI64: ABCDEF0123456789 00:30:07.201 UUID: bc9e7752-bae6-43a9-a16e-27e56129d8ca 00:30:07.201 Thin Provisioning: Not Supported 00:30:07.201 Per-NS Atomic Units: Yes 00:30:07.201 Atomic Boundary Size (Normal): 0 00:30:07.201 Atomic Boundary Size (PFail): 0 00:30:07.201 Atomic Boundary Offset: 0 00:30:07.201 
Maximum Single Source Range Length: 65535 00:30:07.201 Maximum Copy Length: 65535 00:30:07.201 Maximum Source Range Count: 1 00:30:07.201 NGUID/EUI64 Never Reused: No 00:30:07.201 Namespace Write Protected: No 00:30:07.201 Number of LBA Formats: 1 00:30:07.201 Current LBA Format: LBA Format #00 00:30:07.201 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:07.201 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:07.201 rmmod nvme_tcp 00:30:07.201 rmmod nvme_fabrics 00:30:07.201 rmmod nvme_keyring 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:30:07.201 
16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 3272138 ']' 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 3272138 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3272138 ']' 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3272138 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3272138 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3272138' 00:30:07.201 killing process with pid 3272138 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3272138 00:30:07.201 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3272138 00:30:07.462 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:07.462 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:30:07.462 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@254 -- # local dev 00:30:07.462 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:07.462 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:07.462 16:54:14 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:07.462 16:54:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # return 0 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@212 -- # [[ -n '' ]] 
00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@274 -- # iptr 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-save 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:09.375 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-restore 00:30:09.375 00:30:09.375 real 0m11.565s 00:30:09.376 user 0m8.547s 00:30:09.376 sys 0m6.012s 00:30:09.376 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:09.376 16:54:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:09.376 ************************************ 00:30:09.376 END TEST nvmf_identify 00:30:09.376 ************************************ 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.637 ************************************ 00:30:09.637 START TEST nvmf_perf 00:30:09.637 ************************************ 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:09.637 * Looking for test storage... 00:30:09.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.637 16:54:16 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.637 --rc genhtml_branch_coverage=1 00:30:09.637 --rc genhtml_function_coverage=1 00:30:09.637 --rc genhtml_legend=1 00:30:09.637 --rc geninfo_all_blocks=1 00:30:09.637 --rc geninfo_unexecuted_blocks=1 00:30:09.637 00:30:09.637 ' 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:30:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.637 --rc genhtml_branch_coverage=1 00:30:09.637 --rc genhtml_function_coverage=1 00:30:09.637 --rc genhtml_legend=1 00:30:09.637 --rc geninfo_all_blocks=1 00:30:09.637 --rc geninfo_unexecuted_blocks=1 00:30:09.637 00:30:09.637 ' 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.637 --rc genhtml_branch_coverage=1 00:30:09.637 --rc genhtml_function_coverage=1 00:30:09.637 --rc genhtml_legend=1 00:30:09.637 --rc geninfo_all_blocks=1 00:30:09.637 --rc geninfo_unexecuted_blocks=1 00:30:09.637 00:30:09.637 ' 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.637 --rc genhtml_branch_coverage=1 00:30:09.637 --rc genhtml_function_coverage=1 00:30:09.637 --rc genhtml_legend=1 00:30:09.637 --rc geninfo_all_blocks=1 00:30:09.637 --rc geninfo_unexecuted_blocks=1 00:30:09.637 00:30:09.637 ' 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.637 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:09.899 16:54:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:09.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:09.899 16:54:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:30:09.899 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # local -ga mlx 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:18.044 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.044 16:54:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:18.044 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:18.044 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:18.044 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:18.044 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@245 -- # local 
total_initiator_target_pairs=1 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@247 -- # create_target_ns 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max 
+ no )) 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:18.045 10.0.0.1 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:18.045 10.0.0.2 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:18.045 16:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:18.045 16:54:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:18.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.580 ms 00:30:18.045 00:30:18.045 --- 10.0.0.1 ping statistics --- 00:30:18.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.045 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:18.045 16:54:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:18.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:30:18.045 00:30:18.045 --- 10.0.0.2 ping statistics --- 00:30:18.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.045 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:18.045 
16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # return 1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev= 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # return 0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:18.045 16:54:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # return 1 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev= 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # return 0 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:30:18.045 ' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=3276681 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 3276681 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3276681 ']' 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:18.045 16:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:18.045 [2024-11-05 16:54:24.318449] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:30:18.046 [2024-11-05 16:54:24.318520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.046 [2024-11-05 16:54:24.401798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:18.046 [2024-11-05 16:54:24.443327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.046 [2024-11-05 16:54:24.443362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.046 [2024-11-05 16:54:24.443370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.046 [2024-11-05 16:54:24.443377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.046 [2024-11-05 16:54:24.443383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:18.046 [2024-11-05 16:54:24.445012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.046 [2024-11-05 16:54:24.445129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.046 [2024-11-05 16:54:24.445287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.046 [2024-11-05 16:54:24.445288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:18.307 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:18.307 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:30:18.307 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:18.307 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:18.307 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:18.307 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.307 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:18.307 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:18.879 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:18.879 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:18.879 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:30:18.879 16:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.140 16:54:26 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:19.140 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:30:19.140 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:19.140 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:19.140 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:19.140 [2024-11-05 16:54:26.203744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.401 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:19.401 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:19.401 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:19.662 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:19.662 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:19.922 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.922 [2024-11-05 16:54:26.926397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.922 16:54:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:20.183 16:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:30:20.183 16:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:20.183 16:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:20.183 16:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:21.567 Initializing NVMe Controllers 00:30:21.567 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:30:21.567 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:30:21.567 Initialization complete. Launching workers. 00:30:21.567 ======================================================== 00:30:21.567 Latency(us) 00:30:21.567 Device Information : IOPS MiB/s Average min max 00:30:21.567 PCIE (0000:65:00.0) NSID 1 from core 0: 79472.29 310.44 402.03 13.22 4818.07 00:30:21.567 ======================================================== 00:30:21.567 Total : 79472.29 310.44 402.03 13.22 4818.07 00:30:21.567 00:30:21.567 16:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:22.951 Initializing NVMe Controllers 00:30:22.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:22.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:22.951 Initialization complete. Launching workers. 
00:30:22.951 ======================================================== 00:30:22.951 Latency(us) 00:30:22.951 Device Information : IOPS MiB/s Average min max 00:30:22.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 11111.77 262.33 45962.83 00:30:22.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17123.23 7016.44 48813.86 00:30:22.951 ======================================================== 00:30:22.951 Total : 155.00 0.61 13477.57 262.33 48813.86 00:30:22.951 00:30:22.951 16:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.337 Initializing NVMe Controllers 00:30:24.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:24.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:24.337 Initialization complete. Launching workers. 
00:30:24.337 ======================================================== 00:30:24.337 Latency(us) 00:30:24.337 Device Information : IOPS MiB/s Average min max 00:30:24.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10417.18 40.69 3071.35 539.49 8202.10 00:30:24.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3818.33 14.92 8437.49 5183.12 18080.88 00:30:24.337 ======================================================== 00:30:24.337 Total : 14235.51 55.61 4510.69 539.49 18080.88 00:30:24.337 00:30:24.337 16:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:24.337 16:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:24.337 16:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:26.881 Initializing NVMe Controllers 00:30:26.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:26.881 Controller IO queue size 128, less than required. 00:30:26.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:26.881 Controller IO queue size 128, less than required. 00:30:26.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:26.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:26.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:26.881 Initialization complete. Launching workers. 
00:30:26.881 ======================================================== 00:30:26.881 Latency(us) 00:30:26.881 Device Information : IOPS MiB/s Average min max 00:30:26.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1579.19 394.80 82555.83 58233.31 155556.70 00:30:26.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.42 151.60 218825.03 73020.76 316331.27 00:30:26.881 ======================================================== 00:30:26.881 Total : 2185.60 546.40 120365.17 58233.31 316331.27 00:30:26.881 00:30:26.881 16:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:26.882 No valid NVMe controllers or AIO or URING devices found 00:30:26.882 Initializing NVMe Controllers 00:30:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:26.882 Controller IO queue size 128, less than required. 00:30:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:26.882 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:26.882 Controller IO queue size 128, less than required. 00:30:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:26.882 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:26.882 WARNING: Some requested NVMe devices were skipped 00:30:26.882 16:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:29.429 Initializing NVMe Controllers 00:30:29.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.429 Controller IO queue size 128, less than required. 00:30:29.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.429 Controller IO queue size 128, less than required. 00:30:29.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:29.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:29.429 Initialization complete. Launching workers. 
00:30:29.429 00:30:29.429 ==================== 00:30:29.429 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:29.429 TCP transport: 00:30:29.429 polls: 22337 00:30:29.429 idle_polls: 13045 00:30:29.429 sock_completions: 9292 00:30:29.429 nvme_completions: 6233 00:30:29.429 submitted_requests: 9374 00:30:29.429 queued_requests: 1 00:30:29.429 00:30:29.429 ==================== 00:30:29.429 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:29.429 TCP transport: 00:30:29.429 polls: 21561 00:30:29.429 idle_polls: 11650 00:30:29.429 sock_completions: 9911 00:30:29.429 nvme_completions: 6837 00:30:29.429 submitted_requests: 10290 00:30:29.429 queued_requests: 1 00:30:29.429 ======================================================== 00:30:29.429 Latency(us) 00:30:29.429 Device Information : IOPS MiB/s Average min max 00:30:29.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1557.92 389.48 83868.89 37843.18 145619.41 00:30:29.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1708.91 427.23 75744.38 39548.38 126787.24 00:30:29.429 ======================================================== 00:30:29.429 Total : 3266.84 816.71 79618.88 37843.18 145619.41 00:30:29.429 00:30:29.429 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:29.429 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:29.689 16:54:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:29.689 rmmod nvme_tcp 00:30:29.689 rmmod nvme_fabrics 00:30:29.689 rmmod nvme_keyring 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:29.689 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:30:29.690 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:30:29.690 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 3276681 ']' 00:30:29.690 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 3276681 00:30:29.690 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3276681 ']' 00:30:29.690 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3276681 00:30:29.690 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:30:29.690 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:29.690 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3276681 00:30:29.950 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:29.950 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:29.950 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3276681' 00:30:29.950 killing process with pid 3276681 00:30:29.950 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@971 -- # kill 3276681 00:30:29.950 16:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3276681 00:30:31.864 16:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:31.864 16:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:30:31.864 16:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@254 -- # local dev 00:30:31.864 16:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:31.864 16:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:31.864 16:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:31.864 16:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # return 0 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:33.787 16:54:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@274 -- # iptr 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-save 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-restore 00:30:33.787 00:30:33.787 real 0m24.308s 00:30:33.787 user 0m58.675s 00:30:33.787 sys 0m8.427s 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:33.787 ************************************ 00:30:33.787 END TEST nvmf_perf 00:30:33.787 ************************************ 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:33.787 16:54:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.049 ************************************ 00:30:34.049 START TEST nvmf_fio_host 00:30:34.049 ************************************ 00:30:34.049 16:54:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:34.049 * Looking for test storage... 00:30:34.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.049 16:54:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:34.049 16:54:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:34.049 16:54:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:34.049 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:34.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.049 --rc genhtml_branch_coverage=1 00:30:34.050 --rc genhtml_function_coverage=1 00:30:34.050 --rc genhtml_legend=1 00:30:34.050 --rc geninfo_all_blocks=1 00:30:34.050 --rc geninfo_unexecuted_blocks=1 00:30:34.050 00:30:34.050 ' 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:34.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.050 --rc genhtml_branch_coverage=1 00:30:34.050 --rc genhtml_function_coverage=1 00:30:34.050 --rc genhtml_legend=1 00:30:34.050 --rc geninfo_all_blocks=1 00:30:34.050 --rc geninfo_unexecuted_blocks=1 00:30:34.050 00:30:34.050 ' 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:34.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.050 --rc genhtml_branch_coverage=1 00:30:34.050 --rc genhtml_function_coverage=1 00:30:34.050 --rc genhtml_legend=1 00:30:34.050 --rc geninfo_all_blocks=1 00:30:34.050 --rc geninfo_unexecuted_blocks=1 00:30:34.050 00:30:34.050 ' 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:34.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.050 --rc genhtml_branch_coverage=1 00:30:34.050 --rc genhtml_function_coverage=1 00:30:34.050 --rc genhtml_legend=1 00:30:34.050 --rc geninfo_all_blocks=1 00:30:34.050 --rc geninfo_unexecuted_blocks=1 00:30:34.050 00:30:34.050 ' 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:34.050 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 
00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:34.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:30:34.312 16:54:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:30:42.461 16:54:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:42.461 16:54:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:42.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:42.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 
)) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.461 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:42.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:42.462 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@247 -- # create_target_ns 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:42.462 16:54:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 
00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 
-- # echo 10.0.0.1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:42.462 10.0.0.1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:42.462 10.0.0.2 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:42.462 16:54:48 
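The `set_ip` trace above derives dotted-quad addresses from 32-bit integers via `val_to_ip` (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2). A minimal standalone sketch of that conversion, assuming the same byte ordering as the `printf '%u.%u.%u.%u\n'` call in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip helper seen in the nvmf/setup.sh trace:
# split a 32-bit integer into four octets, most-significant first
# (167772161 == 0x0A000001 == 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as an integer lets the caller hand out consecutive addresses with plain arithmetic (`ip_pool += 2` in the trace) instead of string manipulation.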
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 
)) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:42.462 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:42.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.677 ms 00:30:42.463 00:30:42.463 --- 10.0.0.1 ping statistics --- 00:30:42.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.463 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:42.463 
16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:42.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:42.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:30:42.463 00:30:42.463 --- 10.0.0.2 ping statistics --- 00:30:42.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.463 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # 
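Both directions of the initiator/target pair report 0% packet loss above. A hedged one-liner for pulling the loss percentage out of iputils-style `ping` statistics like those in the log (the field layout is assumed from the statistics line format shown; the sample string mirrors the log output):

```shell
#!/usr/bin/env bash
# Parse the "packet loss" field from a ping statistics line, as produced
# by the ping_ips checks traced above. Fields are comma-separated;
# the third field is "<N>% packet loss".
stats='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
loss=$(awk -F', ' '{ sub(/%.*/, "", $3); print $3 }' <<< "$stats")
echo "$loss"   # 0
```

A wrapper script could fail the setup phase early when `loss` is nonzero, rather than discovering the dead link later inside fio.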
echo cvl_0_0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # return 1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev= 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # return 0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.463 16:54:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.463 16:54:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # return 1 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev= 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # return 0 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:30:42.463 ' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 
00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3283761 00:30:42.463 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.464 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:42.464 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3283761 00:30:42.464 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3283761 ']' 00:30:42.464 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.464 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:42.464 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
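The trace above starts `nvmf_tgt` inside the namespace and then calls `waitforlisten` to block until the process is listening on `/var/tmp/spdk.sock`. A minimal sketch of that kind of wait loop, assuming a simple path-existence poll (SPDK's real helper additionally probes the RPC socket; `wait_for_path` and its retry counts are hypothetical names for illustration):

```shell
#!/usr/bin/env bash
# Sketch of a "waitforlisten"-style poll: retry until a path appears,
# then let the caller proceed to issue RPCs against it.
wait_for_path() {
  local path=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}

# Hypothetical usage: a background job creates the path shortly after start,
# the way nvmf_tgt creates its UNIX domain socket once it is up.
tmp=$(mktemp -u)
( sleep 0.2; : > "$tmp" ) &
wait_for_path "$tmp" && echo "listening"   # listening
```

Bounding the retries matters here: without a cap, a target that crashes during startup would hang the whole autotest stage instead of failing it.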
00:30:42.464 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:42.464 16:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.464 [2024-11-05 16:54:48.705250] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:30:42.464 [2024-11-05 16:54:48.705325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.464 [2024-11-05 16:54:48.788858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.464 [2024-11-05 16:54:48.830718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.464 [2024-11-05 16:54:48.830760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.464 [2024-11-05 16:54:48.830769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.464 [2024-11-05 16:54:48.830776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.464 [2024-11-05 16:54:48.830781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:42.464 [2024-11-05 16:54:48.832606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.464 [2024-11-05 16:54:48.832752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.464 [2024-11-05 16:54:48.832908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.464 [2024-11-05 16:54:48.832909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.464 16:54:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:42.464 16:54:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:30:42.464 16:54:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:42.725 [2024-11-05 16:54:49.659857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.725 16:54:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:42.725 16:54:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.725 16:54:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.726 16:54:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:42.986 Malloc1 00:30:42.986 16:54:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.248 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:43.248 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.509 [2024-11-05 16:54:50.440415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.509 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:43.771 16:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.032 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:44.032 fio-3.35 
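The `fio_plugin` trace above runs `ldd` on the spdk_nvme ioengine and greps for `libasan`/`libclang_rt.asan` so that, on sanitizer builds, the ASAN runtime can be placed first in `LD_PRELOAD` ahead of the plugin. A sketch of that detection step, condensing the trace's `ldd | grep | awk` pipeline into one awk invocation (`find_asan_lib` is a hypothetical name; the trace does this inline):

```shell
#!/usr/bin/env bash
# Scan a binary's dynamic dependencies for an ASAN runtime library.
# Returns the resolved library path, or nothing on a non-sanitizer build.
find_asan_lib() {
  local binary=$1
  ldd "$binary" 2>/dev/null | awk '/libasan|libclang_rt\.asan/ { print $3; exit }'
}

asan_lib=$(find_asan_lib /bin/ls)
# e.g. LD_PRELOAD=fio_plugin_path when no sanitizer library is found
echo "LD_PRELOAD=${asan_lib:+$asan_lib }fio_plugin_path"
```

Preloading the sanitizer runtime before the plugin avoids the "ASan runtime does not come first in initial library list" abort that fio would otherwise hit when dlopen'ing an instrumented ioengine.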
00:30:44.032 Starting 1 thread 00:30:46.578 00:30:46.578 test: (groupid=0, jobs=1): err= 0: pid=3284360: Tue Nov 5 16:54:53 2024 00:30:46.578 read: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(98.6MiB/2004msec) 00:30:46.578 slat (usec): min=2, max=258, avg= 2.15, stdev= 2.29 00:30:46.578 clat (usec): min=3266, max=8987, avg=5586.17, stdev=1000.17 00:30:46.578 lat (usec): min=3296, max=8989, avg=5588.32, stdev=1000.19 00:30:46.578 clat percentiles (usec): 00:30:46.578 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:30:46.578 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5342], 00:30:46.578 | 70.00th=[ 5473], 80.00th=[ 6521], 90.00th=[ 7439], 95.00th=[ 7767], 00:30:46.578 | 99.00th=[ 8291], 99.50th=[ 8455], 99.90th=[ 8717], 99.95th=[ 8848], 00:30:46.578 | 99.99th=[ 8979] 00:30:46.578 bw ( KiB/s): min=36862, max=55424, per=99.86%, avg=50313.50, stdev=8989.75, samples=4 00:30:46.578 iops : min= 9215, max=13856, avg=12578.25, stdev=2247.69, samples=4 00:30:46.578 write: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(98.5MiB/2004msec); 0 zone resets 00:30:46.578 slat (usec): min=2, max=237, avg= 2.20, stdev= 1.63 00:30:46.578 clat (usec): min=2562, max=7593, avg=4510.44, stdev=805.79 00:30:46.578 lat (usec): min=2577, max=7595, avg=4512.65, stdev=805.84 00:30:46.578 clat percentiles (usec): 00:30:46.578 | 1.00th=[ 3490], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3949], 00:30:46.578 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:30:46.578 | 70.00th=[ 4424], 80.00th=[ 5276], 90.00th=[ 5997], 95.00th=[ 6259], 00:30:46.578 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7046], 99.95th=[ 7177], 00:30:46.578 | 99.99th=[ 7504] 00:30:46.578 bw ( KiB/s): min=37700, max=55808, per=99.96%, avg=50303.00, stdev=8507.92, samples=4 00:30:46.578 iops : min= 9425, max=13952, avg=12575.75, stdev=2126.98, samples=4 00:30:46.578 lat (msec) : 4=12.36%, 10=87.64% 00:30:46.578 cpu : usr=72.49%, sys=26.31%, ctx=42, majf=0, minf=16 00:30:46.578 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:46.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:46.578 issued rwts: total=25243,25211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:46.578 00:30:46.578 Run status group 0 (all jobs): 00:30:46.578 READ: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=98.6MiB (103MB), run=2004-2004msec 00:30:46.578 WRITE: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=98.5MiB (103MB), run=2004-2004msec 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 
-- # local asan_lib= 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:46.578 16:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:46.839 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 
16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:46.839 fio-3.35 00:30:46.839 Starting 1 thread 00:30:49.383 [2024-11-05 16:54:56.270323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e18390 is same with the state(6) to be set 00:30:49.383 00:30:49.383 test: (groupid=0, jobs=1): err= 0: pid=3285125: Tue Nov 5 16:54:56 2024 00:30:49.383 read: IOPS=9152, BW=143MiB/s (150MB/s)(287MiB/2004msec) 00:30:49.383 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.60 00:30:49.383 clat (usec): min=2222, max=51301, avg=8518.46, stdev=3750.84 00:30:49.383 lat (usec): min=2225, max=51304, avg=8522.06, stdev=3750.88 00:30:49.383 clat percentiles (usec): 00:30:49.383 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6587], 00:30:49.383 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8225], 60.00th=[ 8717], 00:30:49.383 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10683], 95.00th=[11076], 00:30:49.383 | 99.00th=[13304], 99.50th=[45876], 99.90th=[50070], 99.95th=[50594], 00:30:49.383 | 99.99th=[51119] 00:30:49.383 bw ( KiB/s): min=61984, max=83136, per=49.23%, avg=72088.00, stdev=8657.22, samples=4 00:30:49.383 iops : min= 3874, max= 5196, avg=4505.50, stdev=541.08, samples=4 00:30:49.383 write: IOPS=5463, BW=85.4MiB/s (89.5MB/s)(148MiB/1732msec); 0 zone resets 00:30:49.383 slat (usec): min=39, max=398, avg=40.92, stdev= 7.74 00:30:49.383 clat (usec): min=2661, max=17458, avg=9495.37, stdev=1536.63 00:30:49.383 lat (usec): min=2700, max=17498, avg=9536.29, stdev=1538.01 00:30:49.383 clat percentiles (usec): 00:30:49.383 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8291], 00:30:49.383 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:30:49.383 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12125], 00:30:49.383 | 99.00th=[14222], 99.50th=[15139], 99.90th=[16581], 99.95th=[17433], 00:30:49.383 | 99.99th=[17433] 00:30:49.383 bw ( KiB/s): min=64416, max=86016, per=86.06%, 
avg=75232.00, stdev=8858.97, samples=4 00:30:49.383 iops : min= 4026, max= 5376, avg=4702.00, stdev=553.69, samples=4 00:30:49.383 lat (msec) : 4=0.61%, 10=75.85%, 20=23.09%, 50=0.37%, 100=0.09% 00:30:49.383 cpu : usr=84.02%, sys=14.48%, ctx=19, majf=0, minf=32 00:30:49.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:30:49.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:49.383 issued rwts: total=18341,9463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:49.383 00:30:49.383 Run status group 0 (all jobs): 00:30:49.383 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=287MiB (300MB), run=2004-2004msec 00:30:49.383 WRITE: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=148MiB (155MB), run=1732-1732msec 00:30:49.383 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:49.644 rmmod nvme_tcp 00:30:49.644 rmmod nvme_fabrics 00:30:49.644 rmmod nvme_keyring 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 3283761 ']' 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 3283761 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3283761 ']' 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3283761 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3283761 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3283761' 00:30:49.644 killing process with pid 3283761 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3283761 00:30:49.644 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3283761 00:30:49.906 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # 
'[' '' == iso ']' 00:30:49.906 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:30:49.906 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@254 -- # local dev 00:30:49.906 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:49.906 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:49.906 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:49.906 16:54:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # return 0 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 
-- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@274 -- # iptr 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-save 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-restore 00:30:51.910 00:30:51.910 real 0m17.962s 00:30:51.910 user 1m9.484s 00:30:51.910 sys 0m7.633s 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.910 ************************************ 00:30:51.910 END TEST nvmf_fio_host 00:30:51.910 ************************************ 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.910 ************************************ 00:30:51.910 START TEST nvmf_failover 00:30:51.910 ************************************ 00:30:51.910 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:52.173 * Looking for test storage... 00:30:52.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@340 -- # ver1_l=2 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:52.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.173 --rc genhtml_branch_coverage=1 00:30:52.173 --rc genhtml_function_coverage=1 00:30:52.173 --rc genhtml_legend=1 00:30:52.173 --rc geninfo_all_blocks=1 00:30:52.173 --rc geninfo_unexecuted_blocks=1 00:30:52.173 00:30:52.173 ' 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:52.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.173 --rc genhtml_branch_coverage=1 00:30:52.173 --rc genhtml_function_coverage=1 00:30:52.173 --rc genhtml_legend=1 00:30:52.173 --rc geninfo_all_blocks=1 00:30:52.173 --rc geninfo_unexecuted_blocks=1 00:30:52.173 00:30:52.173 ' 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:52.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.173 --rc genhtml_branch_coverage=1 00:30:52.173 --rc genhtml_function_coverage=1 00:30:52.173 --rc genhtml_legend=1 00:30:52.173 --rc geninfo_all_blocks=1 00:30:52.173 --rc geninfo_unexecuted_blocks=1 00:30:52.173 00:30:52.173 ' 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:52.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.173 --rc genhtml_branch_coverage=1 00:30:52.173 --rc genhtml_function_coverage=1 00:30:52.173 --rc genhtml_legend=1 00:30:52.173 --rc geninfo_all_blocks=1 00:30:52.173 --rc geninfo_unexecuted_blocks=1 00:30:52.173 00:30:52.173 ' 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@7 -- # uname -s 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:52.173 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:52.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:30:52.174 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:31:00.322 16:55:06 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:00.322 16:55:06 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:00.322 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:00.322 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:00.322 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.322 16:55:06 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:00.322 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@247 -- # create_target_ns 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:00.322 16:55:06 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:00.322 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # 
[[ tcp == tcp ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:00.323 10.0.0.1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:00.323 10.0.0.2 
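The `val_to_ip` step traced above turns a 32-bit value from the ip_pool (0x0a000001) into a dotted-quad address before `ip addr add` runs. A minimal standalone sketch of that conversion — the shift arithmetic here is an assumption reconstructed from the `printf '%u.%u.%u.%u\n' 10 0 0 1` call in the trace, not a copy of SPDK's nvmf/setup.sh:

```shell
#!/usr/bin/env bash
# Hedged sketch: map a 32-bit integer to dotted-quad form, as the
# val_to_ip step does. Octet extraction via shifts is an assumption
# reconstructed from the printf output seen in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001)
val_to_ip 167772162   # 10.0.0.2
```

Consecutive pool values yield the initiator/target pair (10.0.0.1 for cvl_0_0, 10.0.0.2 for cvl_0_1), matching the `ips=("$ip" $((++ip)))` step earlier in the trace.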
00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 
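The `set_up`, `set_ip`, and `get_ip_address` calls traced here all share one dispatch pattern: an optional `in_ns` argument names a command-prefix array (NVMF_TARGET_NS_CMD, i.e. `ip netns exec nvmf_ns_spdk`); when it is non-empty, a bash nameref binds to that array and the command runs through it, otherwise it runs directly. A simplified stand-in, with illustrative names (the real helpers build an `eval` string rather than expanding the array directly):

```shell
#!/usr/bin/env bash
# Hedged sketch of the optional-namespace dispatch in setup.sh. The
# prefix here is a harmless stand-in for (ip netns exec nvmf_ns_spdk);
# function and variable names are illustrative, not SPDK's.
PREFIX_CMD=(env LC_ALL=C)

run_cmd() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns    # nameref: ns now aliases the prefix array
    "${ns[@]}" "$@"       # run the command through the prefix
  else
    "$@"                  # no namespace requested: run directly
  fi
}

run_cmd PREFIX_CMD echo hello   # runs: env LC_ALL=C echo hello
run_cmd ""         echo hello   # runs: echo hello
```

Passing the *name* of the array (not its contents) is what lets one helper serve both the host side (`in_ns=''`) and the target namespace side (`in_ns=NVMF_TARGET_NS_CMD`), as seen in the paired `set_ip cvl_0_0` / `set_ip cvl_0_1 ... NVMF_TARGET_NS_CMD` calls above. Namerefs require bash 4.3+.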
00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:00.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.635 ms 00:31:00.323 00:31:00.323 --- 10.0.0.1 ping statistics --- 00:31:00.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.323 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:00.323 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:00.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:31:00.324 00:31:00.324 --- 10.0.0.2 ping statistics --- 00:31:00.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.324 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # 
local dev=initiator0 in_ns= ip 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 
00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # return 1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev= 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # return 0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # return 1 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev= 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # return 0 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:31:00.324 ' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=3289814 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 3289814 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3289814 ']' 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.324 16:55:06 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:00.324 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:00.324 [2024-11-05 16:55:06.839936] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:31:00.324 [2024-11-05 16:55:06.840003] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.324 [2024-11-05 16:55:06.939823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:00.325 [2024-11-05 16:55:06.990783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.325 [2024-11-05 16:55:06.990837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.325 [2024-11-05 16:55:06.990845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.325 [2024-11-05 16:55:06.990853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.325 [2024-11-05 16:55:06.990859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
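The `waitforlisten 3289814` step above blocks until the freshly launched nvmf_tgt is up and serving its RPC socket at /var/tmp/spdk.sock, retrying up to `max_retries=100` times. A simplified stand-in for that polling loop — not SPDK's autotest_common.sh implementation, which also verifies the process is alive and the socket accepts connections:

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforlisten idea: poll for the RPC socket path
# until it appears or max_retries is exhausted. Names and the bare -e
# existence check are simplifications of the real helper.
waitforlisten_sketch() {
  local rpc_addr=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [[ -e $rpc_addr ]] && return 0   # real code also probes the listener
    sleep 0.1
  done
  return 1                            # target never came up
}

sock=$(mktemp)                        # stand-in for /var/tmp/spdk.sock
waitforlisten_sketch "$sock" && echo "listening"
rm -f "$sock"
```

The same pattern recurs later in the trace for bdevperf's own RPC socket (`waitforlisten 3290225 /var/tmp/bdevperf.sock`).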
00:31:00.325 [2024-11-05 16:55:06.992593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.325 [2024-11-05 16:55:06.992781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:00.325 [2024-11-05 16:55:06.992846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.589 16:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:00.589 16:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:31:00.589 16:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:00.589 16:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:00.589 16:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:00.852 16:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.852 16:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:00.852 [2024-11-05 16:55:07.828001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.853 16:55:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:01.113 Malloc0 00:31:01.113 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.375 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:01.375 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.636 [2024-11-05 16:55:08.570225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.636 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:01.898 [2024-11-05 16:55:08.746682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:01.898 [2024-11-05 16:55:08.923245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3290225 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3290225 /var/tmp/bdevperf.sock 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3290225 ']' 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:01.898 16:55:08 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:01.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:01.898 16:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:02.841 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:02.841 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:31:02.841 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:03.412 NVMe0n1 00:31:03.412 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:03.412 00:31:03.412 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3290517 00:31:03.412 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:03.412 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:04.797 16:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:04.797 [2024-11-05 16:55:11.615633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05
16:55:11.615847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05 16:55:11.615852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05 16:55:11.615857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05 16:55:11.615861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05 16:55:11.615866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05 16:55:11.615870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05 16:55:11.615875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05 16:55:11.615879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 [2024-11-05 16:55:11.615884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c14e0 is same with the state(6) to be set 00:31:04.797 16:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:08.098 16:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:08.098 00:31:08.098 16:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:08.098 [2024-11-05 16:55:15.122360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c2030 is same with the state(6) to be set 
00:31:08.099 16:55:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:11.548 16:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.548 [2024-11-05 16:55:18.309204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.548 16:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:12.488 16:55:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:12.488 16:55:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3290517 00:31:19.077 { 00:31:19.077 "results": [ 00:31:19.077 { 00:31:19.077 "job": "NVMe0n1", 00:31:19.077 "core_mask": "0x1", 00:31:19.077 "workload": "verify", 00:31:19.077 "status": "finished", 00:31:19.077 "verify_range": { 00:31:19.077 "start": 0, 00:31:19.077 "length": 16384 00:31:19.077 }, 00:31:19.077 "queue_depth": 128, 00:31:19.077 "io_size": 4096, 00:31:19.077 "runtime": 15.007553, 00:31:19.077 "iops": 11123.13246536594, 00:31:19.077 "mibps": 43.4497361928357, 00:31:19.077 "io_failed": 4517, 00:31:19.077 "io_timeout": 0, 00:31:19.077 "avg_latency_us": 11176.49947832579, 00:31:19.077 "min_latency_us": 512.0, 00:31:19.077 "max_latency_us": 30801.92 00:31:19.077 } 00:31:19.077 ], 00:31:19.077 "core_count": 1 00:31:19.077 } 00:31:19.077 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3290225 00:31:19.077 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3290225 ']' 00:31:19.077 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3290225 00:31:19.077 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@957 -- # uname 00:31:19.077 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:19.077 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3290225 00:31:19.078 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:19.078 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:19.078 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3290225' 00:31:19.078 killing process with pid 3290225 00:31:19.078 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3290225 00:31:19.078 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3290225 00:31:19.078 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.078 [2024-11-05 16:55:09.005260] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:31:19.078 [2024-11-05 16:55:09.005319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290225 ] 00:31:19.078 [2024-11-05 16:55:09.075931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.078 [2024-11-05 16:55:09.111719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.078 Running I/O for 15 seconds... 
00:31:19.078 11093.00 IOPS, 43.33 MiB/s [2024-11-05T15:55:26.141Z] [2024-11-05 16:55:11.616241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.078 [2024-11-05 16:55:11.616302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.078 [2024-11-05 16:55:11.616319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.078 [2024-11-05 16:55:11.616337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.078 [2024-11-05 16:55:11.616354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:19.078 [2024-11-05 16:55:11.616370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.078 [2024-11-05 16:55:11.616387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.078 [2024-11-05 16:55:11.616404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 
[2024-11-05 16:55:11.616658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616757] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:19.078 [2024-11-05 16:55:11.616857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.078 [2024-11-05 16:55:11.616864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.616873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.616881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.616896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.616903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.616913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.616920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.616930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.616937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.616946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.616954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.616963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.616971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.616980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.616988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.616997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 
16:55:11.617148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617238] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96232 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.079 [2024-11-05 16:55:11.617409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617435] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.079 [2024-11-05 16:55:11.617538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.079 [2024-11-05 16:55:11.617546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:19.080 [2024-11-05 16:55:11.617629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:19.080 [2024-11-05 16:55:11.617914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.080 [2024-11-05 16:55:11.617949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.617985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.617994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.618001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.618011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.618018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.618027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.618034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.618044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.618051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.618060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.080 [2024-11-05 16:55:11.618067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.618089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.080 [2024-11-05 16:55:11.618097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:31:19.080 [2024-11-05 16:55:11.618105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.618117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.080 [2024-11-05 16:55:11.618122] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.080 [2024-11-05 16:55:11.618129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:31:19.080 [2024-11-05 16:55:11.618136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.618143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.080 [2024-11-05 16:55:11.618149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.080 [2024-11-05 16:55:11.618155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:31:19.080 [2024-11-05 16:55:11.618162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.080 [2024-11-05 16:55:11.618170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.080 [2024-11-05 16:55:11.618175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 
[2024-11-05 16:55:11.618217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97048 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97056 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97064 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97072 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97080 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97088 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97096 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97104 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97112 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97120 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.618518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.618523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.618529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97128 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.618536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.629131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.629142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.629151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.629165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.629171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97144 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.629178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.629196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.629202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.629210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.629223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.629229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.629236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.081 [2024-11-05 16:55:11.629250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.081 [2024-11-05 16:55:11.629256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96392 len:8 PRP1 0x0 PRP2 0x0 00:31:19.081 [2024-11-05 16:55:11.629263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629312] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:19.081 [2024-11-05 16:55:11.629343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.081 [2024-11-05 16:55:11.629352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.081 [2024-11-05 16:55:11.629369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.081 [2024-11-05 16:55:11.629384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.081 [2024-11-05 16:55:11.629400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.081 [2024-11-05 16:55:11.629408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:31:19.081 [2024-11-05 16:55:11.629442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e2d70 (9): Bad file descriptor 00:31:19.081 [2024-11-05 16:55:11.632951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:19.081 [2024-11-05 16:55:11.671141] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:31:19.081 11142.00 IOPS, 43.52 MiB/s [2024-11-05T15:55:26.144Z] 11197.00 IOPS, 43.74 MiB/s [2024-11-05T15:55:26.144Z] 11179.50 IOPS, 43.67 MiB/s [2024-11-05T15:55:26.145Z] [2024-11-05 16:55:15.125775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.082 [2024-11-05 16:55:15.125815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.082 [2024-11-05 16:55:15.125846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.082 [2024-11-05 16:55:15.125864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.082 [2024-11-05 16:55:15.125882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.082 [2024-11-05 16:55:15.125900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.082 [2024-11-05 16:55:15.125916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.082 [2024-11-05 16:55:15.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.082 [2024-11-05 16:55:15.125951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.125968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 
[2024-11-05 16:55:15.125986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.125995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126081] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.082 [2024-11-05 16:55:15.126366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.082 [2024-11-05 16:55:15.126373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 
[2024-11-05 16:55:15.126466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 
[2024-11-05 16:55:15.126756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.083 [2024-11-05 16:55:15.126963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.126984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.083 [2024-11-05 16:55:15.126992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28296 len:8 PRP1 0x0 PRP2 0x0 00:31:19.083 [2024-11-05 16:55:15.127000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.127038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.083 [2024-11-05 16:55:15.127048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.127057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.083 [2024-11-05 16:55:15.127065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.127073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.083 [2024-11-05 16:55:15.127080] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.083 [2024-11-05 16:55:15.127088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.083 [2024-11-05 16:55:15.127095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.084 [2024-11-05 16:55:15.127102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2d70 is same with the state(6) to be set 00:31:19.084 [2024-11-05 16:55:15.127261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.084 [2024-11-05 16:55:15.127269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.084 [2024-11-05 16:55:15.127276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28304 len:8 PRP1 0x0 PRP2 0x0 00:31:19.084 [2024-11-05 16:55:15.127283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.084 [2024-11-05 16:55:15.127292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.084 [2024-11-05 16:55:15.127298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.084 [2024-11-05 16:55:15.127304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28312 len:8 PRP1 0x0 PRP2 0x0 00:31:19.084 [2024-11-05 16:55:15.127312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.084 [2024-11-05 16:55:15.127320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.084 [2024-11-05 16:55:15.127328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:31:19.084 [2024-11-05 16:55:15.127334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28320 len:8 PRP1 0x0 PRP2 0x0 00:31:19.084 [2024-11-05 16:55:15.127341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.084 [2024-11-05 16:55:15.127349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.084 [2024-11-05 16:55:15.127355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.084 [2024-11-05 16:55:15.127361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28328 len:8 PRP1 0x0 PRP2 0x0 00:31:19.084 [2024-11-05 16:55:15.127368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.084 [2024-11-05 16:55:15.127376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.084 [2024-11-05 16:55:15.127381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.084 [2024-11-05 16:55:15.127387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28336 len:8 PRP1 0x0 PRP2 0x0 00:31:19.084 [2024-11-05 16:55:15.127395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.084 [2024-11-05 16:55:15.127402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.084 [2024-11-05 16:55:15.127408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.084 [2024-11-05 16:55:15.127414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28344 len:8 PRP1 0x0 PRP2 0x0 00:31:19.084 [2024-11-05 16:55:15.127421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.084 [2024-11-05 16:55:15.127429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:19.084 [2024-11-05 16:55:15.127435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:19.084 [2024-11-05 16:55:15.127441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28352 len:8 PRP1 0x0 PRP2 0x0
00:31:19.084 [2024-11-05 16:55:15.127448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.084 [... the same abort_queued_reqs / manual_complete_request / print_command / print_completion cycle repeats for each queued 8-block I/O: WRITE lba:28360 through lba:28768, then READ lba:27752 through lba:27808, then WRITE lba:27816 through lba:27888, all aborted with SQ DELETION (00/08) ...]
00:31:19.087 [2024-11-05 16:55:15.139187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27896 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27904 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27912 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27920 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27928 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27936 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27944 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27952 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27960 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27968 len:8 PRP1 0x0 PRP2 0x0 00:31:19.087 [2024-11-05 16:55:15.139431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.087 [2024-11-05 16:55:15.139441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.087 [2024-11-05 16:55:15.139446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.087 [2024-11-05 16:55:15.139452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27976 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27984 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27992 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28000 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 
[2024-11-05 16:55:15.139551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28008 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28016 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28024 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:28032 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28040 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28048 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28056 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139731] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28064 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28072 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28080 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 
16:55:15.139825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28088 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28096 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28104 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28112 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28120 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.139939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.139946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.139951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.139957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28128 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.147211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.147241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.147249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.147257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28136 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.147265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.147273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.147279] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.147285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28144 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.147293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.147300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.147306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.088 [2024-11-05 16:55:15.147312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28152 len:8 PRP1 0x0 PRP2 0x0 00:31:19.088 [2024-11-05 16:55:15.147319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.088 [2024-11-05 16:55:15.147327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.088 [2024-11-05 16:55:15.147336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28160 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28168 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 
[2024-11-05 16:55:15.147378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28176 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28184 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28192 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28200 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28208 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28216 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28224 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28232 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28240 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28248 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28256 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28264 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28272 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28280 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28288 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.089 [2024-11-05 16:55:15.147797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.089 [2024-11-05 16:55:15.147803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28296 len:8 PRP1 0x0 PRP2 0x0 00:31:19.089 [2024-11-05 16:55:15.147810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:15.147853] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:19.089 [2024-11-05 16:55:15.147863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:31:19.089 [2024-11-05 16:55:15.147916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e2d70 (9): Bad file descriptor 00:31:19.089 [2024-11-05 16:55:15.151394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:19.089 [2024-11-05 16:55:15.182531] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:31:19.089 11049.00 IOPS, 43.16 MiB/s [2024-11-05T15:55:26.152Z] 11068.17 IOPS, 43.24 MiB/s [2024-11-05T15:55:26.152Z] 11083.86 IOPS, 43.30 MiB/s [2024-11-05T15:55:26.152Z] 11144.75 IOPS, 43.53 MiB/s [2024-11-05T15:55:26.152Z] [2024-11-05 16:55:19.500254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.089 [2024-11-05 16:55:19.500301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:19.500319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.089 [2024-11-05 16:55:19.500327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:19.500337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.089 [2024-11-05 16:55:19.500345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:19.500355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.089 [2024-11-05 16:55:19.500362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:19.500371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.089 [2024-11-05 16:55:19.500379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:19.500388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.089 [2024-11-05 16:55:19.500396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.089 [2024-11-05 16:55:19.500411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 
lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 
[2024-11-05 16:55:19.500564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.090 [2024-11-05 16:55:19.500726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 
[2024-11-05 16:55:19.500856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.500983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.500993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.501001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.501010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.501018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.501027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.501035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.501044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.501051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.501066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.090 [2024-11-05 16:55:19.501073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.090 [2024-11-05 16:55:19.501082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 
[2024-11-05 16:55:19.501149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 
[2024-11-05 16:55:19.501433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.091 [2024-11-05 16:55:19.501658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.091 [2024-11-05 16:55:19.501667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 
[2024-11-05 16:55:19.501718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.501991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.501998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 
[2024-11-05 16:55:19.502008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.092 [2024-11-05 16:55:19.502216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.092 [2024-11-05 16:55:19.502233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.092 [2024-11-05 16:55:19.502250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.092 [2024-11-05 16:55:19.502266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.092 [2024-11-05 16:55:19.502282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 
[2024-11-05 16:55:19.502292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.092 [2024-11-05 16:55:19.502299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.092 [2024-11-05 16:55:19.502315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.092 [2024-11-05 16:55:19.502325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.092 [2024-11-05 16:55:19.502334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.093 [2024-11-05 16:55:19.502351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.093 [2024-11-05 16:55:19.502367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.093 [2024-11-05 16:55:19.502384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.093 [2024-11-05 16:55:19.502400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.093 [2024-11-05 16:55:19.502417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.093 [2024-11-05 16:55:19.502433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.093 [2024-11-05 16:55:19.502450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.093 [2024-11-05 16:55:19.502478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.093 [2024-11-05 16:55:19.502485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35384 len:8 PRP1 0x0 PRP2 0x0 00:31:19.093 [2024-11-05 16:55:19.502492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502536] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:19.093 [2024-11-05 16:55:19.502559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.093 [2024-11-05 16:55:19.502567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.093 [2024-11-05 16:55:19.502583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.093 [2024-11-05 16:55:19.502598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.093 [2024-11-05 16:55:19.502616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.093 [2024-11-05 16:55:19.502624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
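The long runs of paired `nvme_io_qpair_print_command` / `spdk_nvme_print_completion` notices above all report the same completion status, `ABORTED - SQ DELETION (00/08)`: status code type 0x0 (generic command status) with status code 0x08, i.e. commands aborted because their submission queue was deleted during the failover. A small illustrative helper (not part of SPDK; the log format is taken from the lines above) can decode that `(SCT/SC)` pair:

```python
# Hypothetical helper (not part of SPDK): decode the "(SCT/SC)" pair that
# spdk_nvme_print_completion logs, e.g. "ABORTED - SQ DELETION (00/08)".
import re

# Small subset of NVMe generic-command status codes (Status Code Type 0x0).
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x04: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_completion(line):
    """Return (sct, sc, description) for a logged completion line, or None."""
    m = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if not m:
        return None
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    desc = GENERIC_STATUS.get(sc, "UNKNOWN") if sct == 0 else "UNKNOWN"
    return sct, sc, desc

line = ("*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 "
        "cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
print(decode_completion(line))  # -> (0, 8, 'ABORTED - SQ DELETION')
```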
00:31:19.093 [2024-11-05 16:55:19.506172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:19.093 [2024-11-05 16:55:19.506200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e2d70 (9): Bad file descriptor 00:31:19.093 [2024-11-05 16:55:19.542724] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:31:19.093 11094.11 IOPS, 43.34 MiB/s [2024-11-05T15:55:26.156Z] 11114.60 IOPS, 43.42 MiB/s [2024-11-05T15:55:26.156Z] 11108.09 IOPS, 43.39 MiB/s [2024-11-05T15:55:26.156Z] 11108.25 IOPS, 43.39 MiB/s [2024-11-05T15:55:26.156Z] 11123.54 IOPS, 43.45 MiB/s [2024-11-05T15:55:26.156Z] 11122.93 IOPS, 43.45 MiB/s 00:31:19.093 Latency(us) 00:31:19.093 [2024-11-05T15:55:26.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.093 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:19.093 Verification LBA range: start 0x0 length 0x4000 00:31:19.093 NVMe0n1 : 15.01 11123.13 43.45 300.98 0.00 11176.50 512.00 30801.92 00:31:19.093 [2024-11-05T15:55:26.156Z] =================================================================================================================== 00:31:19.093 [2024-11-05T15:55:26.156Z] Total : 11123.13 43.45 300.98 0.00 11176.50 512.00 30801.92 00:31:19.093 Received shutdown signal, test time was about 15.000000 seconds 00:31:19.093 00:31:19.093 Latency(us) 00:31:19.093 [2024-11-05T15:55:26.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.093 [2024-11-05T15:55:26.156Z] =================================================================================================================== 00:31:19.093 [2024-11-05T15:55:26.156Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3293527 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3293527 /var/tmp/bdevperf.sock 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3293527 ']' 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:19.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
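The check above (`grep -c 'Resetting controller successful'` followed by `(( count != 3 ))`) asserts that the log recorded exactly three successful controller resets, one per failover hop between the 4420/4421/4422 listeners. A minimal sketch of the same check in Python, with the log excerpt abbreviated from the output above:

```python
# Illustrative sketch of the failover.sh pass criterion: count the
# bdev_nvme_reset_ctrlr_complete notices and require exactly three,
# mirroring `grep -c 'Resetting controller successful'`.
log = """\
[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
"""

count = sum("Resetting controller successful" in l for l in log.splitlines())
assert count == 3, f"expected 3 successful resets, saw {count}"
print(count)  # -> 3
```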
00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:19.093 16:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:19.663 16:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:19.663 16:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:31:19.663 16:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:19.923 [2024-11-05 16:55:26.802673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:19.923 16:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:19.923 [2024-11-05 16:55:26.987111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:20.183 16:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:20.443 NVMe0n1 00:31:20.443 16:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:20.703 00:31:20.703 16:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:20.963 00:31:20.963 16:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:20.963 16:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:21.224 16:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:21.484 16:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:24.782 16:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:24.782 16:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:24.782 16:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3294588 00:31:24.782 16:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:24.782 16:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3294588 00:31:25.722 { 00:31:25.722 "results": [ 00:31:25.722 { 00:31:25.722 "job": "NVMe0n1", 00:31:25.722 "core_mask": "0x1", 00:31:25.722 "workload": "verify", 00:31:25.722 "status": "finished", 00:31:25.723 "verify_range": { 00:31:25.723 "start": 0, 00:31:25.723 "length": 16384 00:31:25.723 }, 00:31:25.723 "queue_depth": 128, 00:31:25.723 "io_size": 4096, 00:31:25.723 "runtime": 1.009701, 00:31:25.723 "iops": 11273.634471987252, 00:31:25.723 "mibps": 44.037634656200204, 00:31:25.723 "io_failed": 0, 00:31:25.723 "io_timeout": 0, 00:31:25.723 "avg_latency_us": 
11298.899781545579, 00:31:25.723 "min_latency_us": 2539.52, 00:31:25.723 "max_latency_us": 10103.466666666667 00:31:25.723 } 00:31:25.723 ], 00:31:25.723 "core_count": 1 00:31:25.723 } 00:31:25.723 16:55:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:25.723 [2024-11-05 16:55:25.853003] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:31:25.723 [2024-11-05 16:55:25.853062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293527 ] 00:31:25.723 [2024-11-05 16:55:25.923825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.723 [2024-11-05 16:55:25.959116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.723 [2024-11-05 16:55:28.334445] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:25.723 [2024-11-05 16:55:28.334493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.723 [2024-11-05 16:55:28.334505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.723 [2024-11-05 16:55:28.334514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.723 [2024-11-05 16:55:28.334522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.723 [2024-11-05 16:55:28.334530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
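The `perform_tests` run above dumps its result as a JSON blob interleaved with the log timestamps. Reproduced without those prefixes (values copied from the output above), the blob can be consumed programmatically rather than via the rendered latency table:

```python
# Sketch: parse the bdevperf JSON result emitted by perform_tests.
# The values below are the ones printed in the log; the surrounding
# timestamp prefixes have been stripped for readability.
import json

raw = """
{
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x1",
      "workload": "verify",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 1.009701,
      "iops": 11273.634471987252,
      "mibps": 44.037634656200204,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 11298.899781545579,
      "min_latency_us": 2539.52,
      "max_latency_us": 10103.466666666667
    }
  ],
  "core_count": 1
}
"""

job = json.loads(raw)["results"][0]
print(f'{job["job"]}: {job["iops"]:.2f} IOPS, '
      f'{job["mibps"]:.2f} MiB/s, {job["io_failed"]} failed')
```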
cdw10:00000000 cdw11:00000000 00:31:25.723 [2024-11-05 16:55:28.334537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.723 [2024-11-05 16:55:28.334545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.723 [2024-11-05 16:55:28.334552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.723 [2024-11-05 16:55:28.334559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:31:25.723 [2024-11-05 16:55:28.334586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:31:25.723 [2024-11-05 16:55:28.334602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d3d70 (9): Bad file descriptor 00:31:25.723 [2024-11-05 16:55:28.355851] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:31:25.723 Running I/O for 1 seconds... 
00:31:25.723 11255.00 IOPS, 43.96 MiB/s 00:31:25.723 Latency(us) 00:31:25.723 [2024-11-05T15:55:32.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.723 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:25.723 Verification LBA range: start 0x0 length 0x4000 00:31:25.723 NVMe0n1 : 1.01 11273.63 44.04 0.00 0.00 11298.90 2539.52 10103.47 00:31:25.723 [2024-11-05T15:55:32.786Z] =================================================================================================================== 00:31:25.723 [2024-11-05T15:55:32.786Z] Total : 11273.63 44.04 0.00 0.00 11298.90 2539.52 10103.47 00:31:25.723 16:55:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:25.723 16:55:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:25.983 16:55:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:25.983 16:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:25.983 16:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:26.244 16:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:26.504 16:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3293527 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3293527 ']' 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3293527 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3293527 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:29.802 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3293527' 00:31:29.802 killing process with pid 3293527 00:31:29.803 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3293527 00:31:29.803 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3293527 00:31:29.803 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:29.803 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:30.064 16:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:30.064 rmmod nvme_tcp 00:31:30.064 rmmod nvme_fabrics 00:31:30.064 rmmod nvme_keyring 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 3289814 ']' 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 3289814 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3289814 ']' 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3289814 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3289814 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3289814' 00:31:30.064 killing process with pid 3289814 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3289814 00:31:30.064 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3289814 00:31:30.325 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:30.325 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:31:30.325 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@254 -- # local dev 00:31:30.325 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:30.325 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:30.325 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:30.325 16:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:32.241 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # delete_main_bridge 00:31:32.241 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:32.241 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # return 0 00:31:32.241 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:32.241 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:32.241 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:32.241 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@269 -- # flush_ip 
cvl_0_0 00:31:32.241 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@274 -- # iptr 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-save 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:32.242 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-restore 00:31:32.242 00:31:32.242 real 0m40.374s 00:31:32.242 user 
2m3.801s 00:31:32.242 sys 0m8.723s 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:32.504 ************************************ 00:31:32.504 END TEST nvmf_failover 00:31:32.504 ************************************ 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.504 ************************************ 00:31:32.504 START TEST nvmf_host_discovery 00:31:32.504 ************************************ 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:32.504 * Looking for test storage... 
00:31:32.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:31:32.504 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:32.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.765 --rc genhtml_branch_coverage=1 00:31:32.765 --rc genhtml_function_coverage=1 00:31:32.765 --rc 
genhtml_legend=1 00:31:32.765 --rc geninfo_all_blocks=1 00:31:32.765 --rc geninfo_unexecuted_blocks=1 00:31:32.765 00:31:32.765 ' 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:32.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.765 --rc genhtml_branch_coverage=1 00:31:32.765 --rc genhtml_function_coverage=1 00:31:32.765 --rc genhtml_legend=1 00:31:32.765 --rc geninfo_all_blocks=1 00:31:32.765 --rc geninfo_unexecuted_blocks=1 00:31:32.765 00:31:32.765 ' 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:32.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.765 --rc genhtml_branch_coverage=1 00:31:32.765 --rc genhtml_function_coverage=1 00:31:32.765 --rc genhtml_legend=1 00:31:32.765 --rc geninfo_all_blocks=1 00:31:32.765 --rc geninfo_unexecuted_blocks=1 00:31:32.765 00:31:32.765 ' 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:32.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.765 --rc genhtml_branch_coverage=1 00:31:32.765 --rc genhtml_function_coverage=1 00:31:32.765 --rc genhtml_legend=1 00:31:32.765 --rc geninfo_all_blocks=1 00:31:32.765 --rc geninfo_unexecuted_blocks=1 00:31:32.765 00:31:32.765 ' 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.765 16:55:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.765 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:31:32.766 
16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:32.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- 
# '[' -z tcp ']' 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:31:32.766 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # e810=() 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # x722=() 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # mlx=() 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:40.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:40.916 16:55:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:40.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.916 16:55:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:40.916 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:40.916 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:40.916 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:40.917 16:55:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@247 -- # create_target_ns 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g 
_dev 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local 
dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:40.917 10.0.0.1 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:40.917 16:55:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:40.917 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:40.918 10.0.0.2 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery 
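The `val_to_ip` calls traced above turn a 32-bit integer into a dotted-quad address (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2). A minimal standalone sketch of that conversion, written here from the values visible in the log rather than copied from `nvmf/setup.sh`:

```shell
# Hedged sketch of the integer-to-dotted-quad conversion performed by the
# val_to_ip helper traced in this log (167772161 -> 10.0.0.1). Each octet is
# one byte of the 32-bit value, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8) & 255 )) \
    $(( val & 255 ))
}

val_to_ip 167772161   # prints 10.0.0.1
val_to_ip 167772162   # prints 10.0.0.2
```

This explains why the setup loop can simply increment `ip` to get the target-side address: 167772161 is 0x0A000001, so adding one yields 10.0.0.2 in the same /24.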
-- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local 
ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:40.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:40.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.629 ms 00:31:40.918 00:31:40.918 --- 10.0.0.1 ping statistics --- 00:31:40.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.918 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:40.918 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:40.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:40.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:31:40.919 00:31:40.919 --- 10.0.0.2 ping statistics --- 00:31:40.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.919 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # return 0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # 
[[ -n cvl_0_0 ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # return 1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev= 00:31:40.919 16:55:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@160 -- # return 0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:40.919 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:40.919 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:40.919 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:40.919 16:55:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:40.919 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:40.919 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.919 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # return 1 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev= 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@160 -- # return 0 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:40.920 16:55:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:31:40.920 ' 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=3299909 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 3299909 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3299909 ']' 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.920 [2024-11-05 16:55:47.146553] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:31:40.920 [2024-11-05 16:55:47.146622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.920 [2024-11-05 16:55:47.247096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.920 [2024-11-05 16:55:47.297809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.920 [2024-11-05 16:55:47.297862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.920 [2024-11-05 16:55:47.297871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.920 [2024-11-05 16:55:47.297878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.920 [2024-11-05 16:55:47.297885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:40.920 [2024-11-05 16:55:47.298679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:40.920 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.181 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.181 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:41.181 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.181 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.181 [2024-11-05 16:55:47.998470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.181 [2024-11-05 16:55:48.010738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:41.181 16:55:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.181 null0 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.181 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.182 null1 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3300252 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3300252 /tmp/host.sock 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 3300252 ']' 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:41.182 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:41.182 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.182 [2024-11-05 16:55:48.108016] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:31:41.182 [2024-11-05 16:55:48.108080] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300252 ] 00:31:41.182 [2024-11-05 16:55:48.183061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.182 [2024-11-05 16:55:48.224619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:42.122 
16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:42.122 16:55:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:42.122 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.122 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.123 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:42.123 16:55:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.123 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.123 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.123 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.123 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.123 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.123 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.384 [2024-11-05 16:55:49.249855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:42.384 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.645 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:31:42.645 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:31:42.904 [2024-11-05 16:55:49.966646] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:42.904 [2024-11-05 16:55:49.966667] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:42.904 [2024-11-05 16:55:49.966680] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:43.164 [2024-11-05 16:55:50.054961] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:43.164 [2024-11-05 16:55:50.156991] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:43.164 [2024-11-05 16:55:50.157969] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0xee4780:1 started. 00:31:43.164 [2024-11-05 16:55:50.159601] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:43.164 [2024-11-05 16:55:50.159619] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:43.164 [2024-11-05 16:55:50.165856] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xee4780 was disconnected and freed. delete nvme_qpair. 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:43.424 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:43.684 16:55:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.684 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:43.685 
16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:31:43.685 [2024-11-05 16:55:50.691880] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xef1ab0:1 started. 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:43.685 [2024-11-05 16:55:50.697161] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xef1ab0 was disconnected and freed. delete nvme_qpair. 
00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.685 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.946 [2024-11-05 16:55:50.793977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:43.946 [2024-11-05 16:55:50.794494] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:43.946 [2024-11-05 16:55:50.794514] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:43.946 16:55:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.946 [2024-11-05 16:55:50.882228] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # local max=10 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.946 [2024-11-05 16:55:50.941982] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:31:43.946 [2024-11-05 16:55:50.942017] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:43.946 [2024-11-05 16:55:50.942025] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:43.946 [2024-11-05 16:55:50.942031] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:43.946 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:43.947 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:45.332 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count 
&& ((notification_count == expected_count))' 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:45.332 16:55:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.332 [2024-11-05 16:55:52.074221] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:45.332 [2024-11-05 16:55:52.074244] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.332 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:45.332 [2024-11-05 16:55:52.081710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.332 [2024-11-05 16:55:52.081731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.332 [2024-11-05 16:55:52.081740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.332 [2024-11-05 16:55:52.081752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.332 [2024-11-05 16:55:52.081760] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.332 [2024-11-05 16:55:52.081767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.333 [2024-11-05 16:55:52.081776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.333 [2024-11-05 16:55:52.081783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.333 [2024-11-05 16:55:52.081791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4e10 is same with the state(6) to be set 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:45.333 [2024-11-05 16:55:52.091724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4e10 (9): Bad file descriptor 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.333 [2024-11-05 16:55:52.101765] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Delete qpairs for reset. 00:31:45.333 [2024-11-05 16:55:52.101777] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:45.333 [2024-11-05 16:55:52.101782] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:45.333 [2024-11-05 16:55:52.101788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:45.333 [2024-11-05 16:55:52.101811] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:45.333 [2024-11-05 16:55:52.102287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.333 [2024-11-05 16:55:52.102325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb4e10 with addr=10.0.0.2, port=4420 00:31:45.333 [2024-11-05 16:55:52.102336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4e10 is same with the state(6) to be set 00:31:45.333 [2024-11-05 16:55:52.102360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4e10 (9): Bad file descriptor 00:31:45.333 [2024-11-05 16:55:52.102387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:45.333 [2024-11-05 16:55:52.102395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:45.333 [2024-11-05 16:55:52.102405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:45.333 [2024-11-05 16:55:52.102413] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:31:45.333 [2024-11-05 16:55:52.102418] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:45.333 [2024-11-05 16:55:52.102423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:45.333 [2024-11-05 16:55:52.111843] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:45.333 [2024-11-05 16:55:52.111857] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:45.333 [2024-11-05 16:55:52.111862] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:45.333 [2024-11-05 16:55:52.111867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:45.333 [2024-11-05 16:55:52.111883] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:45.333 [2024-11-05 16:55:52.112071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.333 [2024-11-05 16:55:52.112084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb4e10 with addr=10.0.0.2, port=4420 00:31:45.333 [2024-11-05 16:55:52.112092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4e10 is same with the state(6) to be set 00:31:45.333 [2024-11-05 16:55:52.112103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4e10 (9): Bad file descriptor 00:31:45.333 [2024-11-05 16:55:52.112114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:45.333 [2024-11-05 16:55:52.112121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:45.333 [2024-11-05 16:55:52.112128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:45.333 [2024-11-05 16:55:52.112134] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:45.333 [2024-11-05 16:55:52.112139] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:45.333 [2024-11-05 16:55:52.112144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:45.333 [2024-11-05 16:55:52.121915] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:45.333 [2024-11-05 16:55:52.121931] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:31:45.333 [2024-11-05 16:55:52.121936] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:45.333 [2024-11-05 16:55:52.121940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:45.333 [2024-11-05 16:55:52.121956] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:45.333 [2024-11-05 16:55:52.122239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.333 [2024-11-05 16:55:52.122252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb4e10 with addr=10.0.0.2, port=4420 00:31:45.333 [2024-11-05 16:55:52.122264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4e10 is same with the state(6) to be set 00:31:45.333 [2024-11-05 16:55:52.122276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4e10 (9): Bad file descriptor 00:31:45.333 [2024-11-05 16:55:52.122287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:45.333 [2024-11-05 16:55:52.122294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:45.333 [2024-11-05 16:55:52.122301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:45.333 [2024-11-05 16:55:52.122308] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:45.333 [2024-11-05 16:55:52.122312] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:45.333 [2024-11-05 16:55:52.122317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.333 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:45.333 [2024-11-05 16:55:52.131987] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:45.333 [2024-11-05 16:55:52.132000] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:45.333 [2024-11-05 16:55:52.132004] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:45.333 [2024-11-05 16:55:52.132009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:45.333 [2024-11-05 16:55:52.132022] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:45.333 [2024-11-05 16:55:52.132307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.333 [2024-11-05 16:55:52.132319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb4e10 with addr=10.0.0.2, port=4420 00:31:45.333 [2024-11-05 16:55:52.132326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4e10 is same with the state(6) to be set 00:31:45.333 [2024-11-05 16:55:52.132337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4e10 (9): Bad file descriptor 00:31:45.334 [2024-11-05 16:55:52.132348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:45.334 [2024-11-05 16:55:52.132354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:45.334 [2024-11-05 16:55:52.132361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:45.334 [2024-11-05 16:55:52.132367] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:45.334 [2024-11-05 16:55:52.132372] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:45.334 [2024-11-05 16:55:52.132376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:45.334 [2024-11-05 16:55:52.142053] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:45.334 [2024-11-05 16:55:52.142068] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:45.334 [2024-11-05 16:55:52.142072] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:45.334 [2024-11-05 16:55:52.142077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:45.334 [2024-11-05 16:55:52.142092] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:45.334 [2024-11-05 16:55:52.142376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.334 [2024-11-05 16:55:52.142389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb4e10 with addr=10.0.0.2, port=4420 00:31:45.334 [2024-11-05 16:55:52.142396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4e10 is same with the state(6) to be set 00:31:45.334 [2024-11-05 16:55:52.142408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4e10 (9): Bad file descriptor 00:31:45.334 [2024-11-05 16:55:52.142418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:45.334 [2024-11-05 16:55:52.142425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:45.334 [2024-11-05 16:55:52.142432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:45.334 [2024-11-05 16:55:52.142438] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:45.334 [2024-11-05 16:55:52.142443] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:45.334 [2024-11-05 16:55:52.142447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:45.334 [2024-11-05 16:55:52.152123] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:45.334 [2024-11-05 16:55:52.152134] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:31:45.334 [2024-11-05 16:55:52.152139] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:45.334 [2024-11-05 16:55:52.152144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:45.334 [2024-11-05 16:55:52.152157] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:45.334 [2024-11-05 16:55:52.152439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.334 [2024-11-05 16:55:52.152450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb4e10 with addr=10.0.0.2, port=4420 00:31:45.334 [2024-11-05 16:55:52.152457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4e10 is same with the state(6) to be set 00:31:45.334 [2024-11-05 16:55:52.152468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb4e10 (9): Bad file descriptor 00:31:45.334 [2024-11-05 16:55:52.152479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:45.334 [2024-11-05 16:55:52.152491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:45.334 [2024-11-05 16:55:52.152499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:45.334 [2024-11-05 16:55:52.152505] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:45.334 [2024-11-05 16:55:52.152510] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:45.334 [2024-11-05 16:55:52.152514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:45.334 [2024-11-05 16:55:52.161869] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:45.334 [2024-11-05 16:55:52.161888] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.334 
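The `get_subsystem_paths` invocations above (`host/discovery.sh@63`) resolve to an `rpc_cmd | jq | sort -n | xargs` pipeline that flattens a controller's listener ports into one sorted line (e.g. `4420 4421`, then just `4421` after the first listener is removed). The sketch below exercises the same pipeline against a faked RPC payload, since no live SPDK target is assumed; the JSON shape is inferred from the `jq` filter in the log.

```shell
# Hypothetical stand-in for get_subsystem_paths: the real helper calls
# `rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0`; here the
# RPC output is faked with a heredoc so only the jq/sort/xargs stage runs.
get_subsystem_paths_demo() {
	jq -r '.[].ctrlrs[].trid.trsvcid' <<'JSON' | sort -n | xargs
[{"ctrlrs":[{"trid":{"trsvcid":"4421"}},{"trid":{"trsvcid":"4420"}}]}]
JSON
}

get_subsystem_paths_demo
```

`sort -n` orders the ports numerically and `xargs` joins them with single spaces, which is what makes the string comparison against `"$NVMF_PORT $NVMF_SECOND_PORT"` in the wait condition stable.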
16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:45.334 16:55:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:31:45.334 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:45.335 16:55:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.335 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:31:45.596 16:55:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.596 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.540 [2024-11-05 16:55:53.514899] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:46.540 [2024-11-05 16:55:53.514916] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:46.540 [2024-11-05 16:55:53.514929] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:46.540 [2024-11-05 16:55:53.602184] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:47.112 [2024-11-05 16:55:53.910738] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:31:47.112 [2024-11-05 16:55:53.911518] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xeefab0:1 started. 00:31:47.112 [2024-11-05 16:55:53.913339] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:47.112 [2024-11-05 16:55:53.913366] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:47.112 [2024-11-05 16:55:53.915756] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xeefab0 was disconnected and freed. delete nvme_qpair. 
00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:47.112 request: 00:31:47.112 { 00:31:47.112 "name": "nvme", 00:31:47.112 "trtype": "tcp", 00:31:47.112 "traddr": "10.0.0.2", 00:31:47.112 "adrfam": "ipv4", 00:31:47.112 "trsvcid": "8009", 00:31:47.112 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:47.112 "wait_for_attach": true, 00:31:47.112 "method": "bdev_nvme_start_discovery", 00:31:47.112 "req_id": 1 00:31:47.112 } 00:31:47.112 Got JSON-RPC error response 00:31:47.112 response: 00:31:47.112 { 00:31:47.112 "code": -17, 00:31:47.112 "message": "File exists" 00:31:47.112 } 00:31:47.112 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 
-- # (( !es == 0 )) 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:47.113 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == 
\n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:47.113 request: 00:31:47.113 { 00:31:47.113 "name": "nvme_second", 00:31:47.113 "trtype": "tcp", 00:31:47.113 "traddr": "10.0.0.2", 00:31:47.113 "adrfam": "ipv4", 00:31:47.113 "trsvcid": "8009", 00:31:47.113 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:47.113 "wait_for_attach": true, 00:31:47.113 "method": "bdev_nvme_start_discovery", 00:31:47.113 "req_id": 1 00:31:47.113 } 00:31:47.113 Got JSON-RPC error response 00:31:47.113 
response: 00:31:47.113 { 00:31:47.113 "code": -17, 00:31:47.113 "message": "File exists" 00:31:47.113 } 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 
00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.113 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.499 [2024-11-05 16:55:55.160778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.499 [2024-11-05 16:55:55.160808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecea30 with addr=10.0.0.2, port=8010 00:31:48.499 [2024-11-05 16:55:55.160821] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:48.499 [2024-11-05 16:55:55.160828] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:48.499 [2024-11-05 16:55:55.160835] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:49.441 [2024-11-05 16:55:56.163127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.441 [2024-11-05 16:55:56.163151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecea30 with addr=10.0.0.2, port=8010 00:31:49.441 [2024-11-05 16:55:56.163162] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:49.441 [2024-11-05 16:55:56.163168] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:49.441 [2024-11-05 16:55:56.163175] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:50.445 [2024-11-05 16:55:57.165149] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:50.445 request: 00:31:50.445 { 00:31:50.445 "name": "nvme_second", 00:31:50.445 "trtype": "tcp", 00:31:50.445 "traddr": "10.0.0.2", 00:31:50.445 "adrfam": "ipv4", 00:31:50.445 "trsvcid": "8010", 00:31:50.445 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:50.445 "wait_for_attach": false, 00:31:50.445 
"attach_timeout_ms": 3000, 00:31:50.445 "method": "bdev_nvme_start_discovery", 00:31:50.445 "req_id": 1 00:31:50.445 } 00:31:50.445 Got JSON-RPC error response 00:31:50.445 response: 00:31:50.445 { 00:31:50.445 "code": -110, 00:31:50.445 "message": "Connection timed out" 00:31:50.445 } 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:50.445 16:55:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3300252 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:50.445 rmmod nvme_tcp 00:31:50.445 rmmod nvme_fabrics 00:31:50.445 rmmod nvme_keyring 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 3299909 ']' 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 3299909 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3299909 ']' 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3299909 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3299909 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3299909' 00:31:50.445 killing process with pid 3299909 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3299909 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3299909 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@254 -- # local dev 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:50.445 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:53.019 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:31:53.019 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:53.019 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # return 0 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_0/address ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@274 -- # iptr 00:31:53.020 16:55:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-save 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:31:53.020 00:31:53.020 real 0m20.164s 00:31:53.020 user 0m23.504s 00:31:53.020 sys 0m7.008s 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.020 ************************************ 00:31:53.020 END TEST nvmf_host_discovery 00:31:53.020 ************************************ 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.020 ************************************ 00:31:53.020 START TEST nvmf_host_multipath_status 00:31:53.020 ************************************ 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:53.020 * Looking for test storage... 
00:31:53.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:31:53.020 16:55:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.020 16:55:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:53.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.020 --rc genhtml_branch_coverage=1 00:31:53.020 --rc genhtml_function_coverage=1 00:31:53.020 --rc genhtml_legend=1 00:31:53.020 --rc geninfo_all_blocks=1 00:31:53.020 --rc geninfo_unexecuted_blocks=1 00:31:53.020 00:31:53.020 ' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:53.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.020 --rc genhtml_branch_coverage=1 00:31:53.020 --rc genhtml_function_coverage=1 00:31:53.020 --rc genhtml_legend=1 00:31:53.020 --rc geninfo_all_blocks=1 00:31:53.020 --rc geninfo_unexecuted_blocks=1 00:31:53.020 00:31:53.020 ' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:53.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.020 --rc genhtml_branch_coverage=1 00:31:53.020 --rc genhtml_function_coverage=1 00:31:53.020 --rc genhtml_legend=1 00:31:53.020 --rc geninfo_all_blocks=1 00:31:53.020 --rc geninfo_unexecuted_blocks=1 00:31:53.020 00:31:53.020 ' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:53.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.020 --rc genhtml_branch_coverage=1 00:31:53.020 --rc genhtml_function_coverage=1 00:31:53.020 --rc genhtml_legend=1 00:31:53.020 --rc geninfo_all_blocks=1 00:31:53.020 --rc geninfo_unexecuted_blocks=1 00:31:53.020 00:31:53.020 ' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:53.020 
16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@15 -- # shopt -s extglob 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.020 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:53.021 16:55:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:53.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
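The trace above records a real (non-fatal) error: `'[' '' -eq 1 ']'` at common.sh line 31 produces "[: : integer expression expected", because POSIX `test` cannot numerically compare an empty string. A minimal reproduction of that failure mode and a defensive rewrite, with illustrative variable names (not the actual SPDK ones):

```shell
# Sketch of the "[: : integer expression expected" failure seen in the
# trace: test(1) rejects an empty operand in a numeric comparison.
nics=""                                 # empty, like the unset variable at line 31

bad_status=0
[ "$nics" -eq 1 ] 2>/dev/null || bad_status=$?   # comparison itself errors out

# Guarding with -n (non-empty) first avoids the malformed test entirely:
if [ -n "$nics" ] && [ "$nics" -eq 1 ]; then
  result="one"
else
  result="unset-or-other"
fi
```

In the log the script simply continues past the error, since the test's non-zero status is only used as a branch condition.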
00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:31:53.021 16:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:32:01.170 16:56:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.170 16:56:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:01.170 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:01.170 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:01.170 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:01.170 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
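The device discovery traced here globs each PCI device's `net/` directory in sysfs and strips the path prefix to get interface names (common.sh@227 and @243). A self-contained sketch of that pattern, using a temporary fake sysfs tree since no real hardware is assumed:

```shell
# Sketch of the PCI-to-netdev lookup from the trace: glob the device's
# net/ subdirectory, then keep only the basenames. A mktemp tree stands
# in for /sys/bus/pci/devices here.
sysfs=$(mktemp -d)
pci="0000:4b:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)          # full paths, as in common.sh@227
pci_net_devs=("${pci_net_devs[@]##*/}")     # basenames, as in common.sh@243
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

The `##*/` expansion trims everything up to the last slash in each array element, which is why the log reports bare names like `cvl_0_0`.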
00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:01.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@247 -- # create_target_ns 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:01.171 16:56:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:01.171 16:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:32:01.171 16:56:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:01.171 10.0.0.1 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:01.171 10.0.0.2 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 
-- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:01.171 16:56:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:01.171 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 
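The `val_to_ip` helper seen earlier in this trace turns a 32-bit pool value into a dotted quad with `printf '%u.%u.%u.%u'` (167772161 is 0x0A000001, i.e. 10.0.0.1). A sketch of the equivalent shift-and-mask conversion:

```shell
# Sketch of the trace's val_to_ip step: split a 32-bit value into four
# octets, most significant first, and print them dotted.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 0xFF)) $((val >> 16 & 0xFF)) \
    $((val >> 8 & 0xFF))  $((val & 0xFF))
}

ip1=$(val_to_ip 167772161)   # the initiator address assigned in the trace
ip2=$(val_to_ip 167772162)   # the target address (pool value + 1)
```

This matches the log, where the initiator gets 10.0.0.1 and the target in the namespace gets 10.0.0.2.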
00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:01.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:01.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.493 ms 00:32:01.172 00:32:01.172 --- 10.0.0.1 ping statistics --- 00:32:01.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.172 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:01.172 16:56:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:01.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:01.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:32:01.172 00:32:01.172 --- 10.0.0.2 ping statistics --- 00:32:01.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.172 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ 
-n initiator0 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # return 1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev= 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # return 0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # return 1 00:32:01.172 16:56:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev= 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # return 0 00:32:01.172 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:32:01.173 ' 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=3306396 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 3306396 00:32:01.173 16:56:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3306396 ']' 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:01.173 16:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:01.173 [2024-11-05 16:56:07.501487] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:32:01.173 [2024-11-05 16:56:07.501556] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.173 [2024-11-05 16:56:07.583526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:01.173 [2024-11-05 16:56:07.625358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.173 [2024-11-05 16:56:07.625394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
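Here the harness launches `nvmf_tgt` inside the `nvmf_ns_spdk` namespace and then `waitforlisten` polls until the RPC socket `/var/tmp/spdk.sock` exists. A sketch of that polling pattern, assuming a generic path (a temp file created by a background job stands in for the app's UNIX socket):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll with a bounded retry count until
# the app's RPC socket path appears. A temp file stands in for spdk.sock.
set -euo pipefail

sock=$(mktemp -u)                 # path that does not exist yet
( sleep 0.3; touch "$sock" ) &    # stand-in for nvmf_tgt creating its socket

waitforlisten() {
    local path=$1 max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

waitforlisten "$sock" && echo "listening"
wait
```

The real helper additionally checks that the PID is still alive between retries, so a crashed target fails fast instead of burning the full retry budget.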
00:32:01.173 [2024-11-05 16:56:07.625402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.173 [2024-11-05 16:56:07.625410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.173 [2024-11-05 16:56:07.625416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.173 [2024-11-05 16:56:07.626734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.173 [2024-11-05 16:56:07.626738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.434 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:01.434 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:32:01.434 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:01.434 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:01.434 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:01.434 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.434 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3306396 00:32:01.434 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:01.696 [2024-11-05 16:56:08.517398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.696 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:32:01.696 Malloc0 00:32:01.696 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:01.957 16:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:02.218 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.218 [2024-11-05 16:56:09.206026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.218 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:02.478 [2024-11-05 16:56:09.358381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:02.478 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3306809 00:32:02.478 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:02.478 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:02.479 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3306809 /var/tmp/bdevperf.sock 00:32:02.479 16:56:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3306809 ']' 00:32:02.479 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:02.479 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:02.479 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:02.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:02.479 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:02.479 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:02.740 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:02.740 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:32:02.740 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:02.740 16:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:03.314 Nvme0n1 00:32:03.314 16:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:03.576 Nvme0n1 00:32:03.576 16:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:03.576 16:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:06.121 16:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:06.121 16:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:06.121 16:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:06.121 16:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:07.063 16:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:07.063 16:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:07.063 16:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.063 16:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:07.324 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.324 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:07.324 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.324 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:07.324 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:07.324 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:07.324 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.324 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:07.586 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.586 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:07.586 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.586 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:07.846 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.846 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:07.846 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.846 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:07.846 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.106 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:08.106 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.106 16:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:08.106 16:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.106 16:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:08.106 16:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:08.367 16:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:08.627 16:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:09.569 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:09.569 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:09.569 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.569 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:09.829 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:09.829 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:09.829 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.829 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:09.829 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.829 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:09.829 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.829 16:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:10.089 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.089 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:10.089 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.089 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:10.350 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.350 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:10.351 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.351 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:10.351 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.610 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:10.610 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.610 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:10.610 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.610 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:10.611 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:10.871 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:10.871 16:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:12.253 16:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:12.254 16:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:12.254 16:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.254 16:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:12.254 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.254 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:12.254 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.254 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:12.254 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:12.254 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:12.254 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.254 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:12.514 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.514 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:12.514 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.514 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:12.781 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.781 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:12.781 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.781 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:13.044 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.044 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:13.044 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.044 16:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:13.044 16:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.044 16:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:13.044 16:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:13.305 16:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:13.565 16:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:14.506 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:14.506 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:14.506 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.506 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:14.766 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.766 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:14.766 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.766 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:14.766 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:14.766 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:14.766 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.766 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:15.026 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.026 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:15.027 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.027 16:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:15.288 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.288 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:15.288 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.288 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:15.288 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.288 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:15.288 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.288 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:15.549 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:15.549 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:15.549 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:15.810 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:16.070 16:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:17.013 16:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:17.013 16:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:17.013 16:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.013 16:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:17.274 16:56:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.274 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:17.274 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:17.274 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.274 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.274 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:17.274 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.274 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:17.535 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.535 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:17.535 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.535 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:17.797 
16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.797 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:17.797 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.797 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:17.797 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.797 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:17.797 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.797 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:18.059 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:18.059 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:18.059 16:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:18.320 16:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:18.320 16:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:19.262 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:19.262 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:19.522 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.522 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:19.522 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:19.522 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:19.522 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.522 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:19.783 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.783 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:19.783 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.783 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:20.043 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.043 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:20.043 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.043 16:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:20.043 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.043 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:20.043 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.043 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:20.304 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:20.304 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:20.304 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.304 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:20.565 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.565 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:20.826 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:20.826 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:20.826 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:21.086 16:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:22.028 16:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:22.028 16:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:22.028 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:22.028 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:22.287 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.287 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:22.287 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.287 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:22.548 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.548 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:22.548 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.548 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:22.548 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.548 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:22.548 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:22.548 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:22.807 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.807 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:22.807 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.807 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:23.067 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.067 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:23.067 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.067 16:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:23.067 16:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.067 16:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:23.067 16:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:23.327 16:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:23.588 16:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:24.529 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:24.529 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:24.529 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.530 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:24.791 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:24.791 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:24.791 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.791 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:24.791 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.791 16:56:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:24.791 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.791 16:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:25.051 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.051 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:25.051 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:25.051 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:25.312 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.312 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:25.312 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:25.312 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:25.312 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.312 
16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:25.312 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:25.312 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:25.573 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.573 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:25.573 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:25.833 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:26.093 16:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:27.038 16:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:27.038 16:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:27.038 16:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.039 16:56:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:27.299 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.299 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:27.299 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.299 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:27.299 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.299 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:27.299 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.299 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:27.560 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.560 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:27.560 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.560 16:56:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:27.820 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.820 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:27.820 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.820 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:27.820 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.820 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:27.820 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.820 16:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:28.080 16:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:28.080 16:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:28.080 16:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:28.341 16:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:28.341 16:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:29.723 16:56:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.723 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:29.983 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.983 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:29.983 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.983 16:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:30.243 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.243 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:30.243 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.243 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:30.243 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.243 
16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:30.243 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.243 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3306809 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3306809 ']' 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3306809 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3306809 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3306809' 00:32:30.503 killing process with pid 3306809 00:32:30.503 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3306809 00:32:30.503 
16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3306809 00:32:30.503 { 00:32:30.503 "results": [ 00:32:30.503 { 00:32:30.503 "job": "Nvme0n1", 00:32:30.503 "core_mask": "0x4", 00:32:30.503 "workload": "verify", 00:32:30.503 "status": "terminated", 00:32:30.503 "verify_range": { 00:32:30.503 "start": 0, 00:32:30.503 "length": 16384 00:32:30.503 }, 00:32:30.503 "queue_depth": 128, 00:32:30.503 "io_size": 4096, 00:32:30.503 "runtime": 26.817929, 00:32:30.503 "iops": 10829.69531316158, 00:32:30.503 "mibps": 42.30349731703742, 00:32:30.503 "io_failed": 0, 00:32:30.503 "io_timeout": 0, 00:32:30.503 "avg_latency_us": 11802.0786370554, 00:32:30.503 "min_latency_us": 317.44, 00:32:30.503 "max_latency_us": 3019898.88 00:32:30.503 } 00:32:30.503 ], 00:32:30.503 "core_count": 1 00:32:30.503 } 00:32:30.767 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3306809 00:32:30.767 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:30.767 [2024-11-05 16:56:09.423139] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:32:30.767 [2024-11-05 16:56:09.423199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3306809 ] 00:32:30.767 [2024-11-05 16:56:09.482071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.767 [2024-11-05 16:56:09.510683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:30.767 Running I/O for 90 seconds... 
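The bdevperf results block above reports "iops", "io_size", and "mibps" together; the throughput figure follows directly from IOPS multiplied by bytes per I/O. A minimal sketch using the values copied from the log (the variable names are ours, not part of the test harness):

```python
# Derive MiB/s from the bdevperf "results" JSON printed above.
iops = 10829.69531316158   # from "iops"
io_size = 4096             # bytes per I/O, from "io_size"

# IOPS x bytes-per-I/O, converted to MiB/s (1 MiB = 1048576 bytes)
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 4))  # -> 42.3035, matching "mibps": 42.30349731703742
```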
00:32:30.767 9614.00 IOPS, 37.55 MiB/s [2024-11-05T15:56:37.830Z]
9684.00 IOPS, 37.83 MiB/s [2024-11-05T15:56:37.830Z]
9662.00 IOPS, 37.74 MiB/s [2024-11-05T15:56:37.830Z]
9686.50 IOPS, 37.84 MiB/s [2024-11-05T15:56:37.830Z]
9914.00 IOPS, 38.73 MiB/s [2024-11-05T15:56:37.830Z]
10451.00 IOPS, 40.82 MiB/s [2024-11-05T15:56:37.830Z]
10786.43 IOPS, 42.13 MiB/s [2024-11-05T15:56:37.830Z]
10753.88 IOPS, 42.01 MiB/s [2024-11-05T15:56:37.830Z]
10627.67 IOPS, 41.51 MiB/s [2024-11-05T15:56:37.830Z]
10530.80 IOPS, 41.14 MiB/s [2024-11-05T15:56:37.830Z]
10455.36 IOPS, 40.84 MiB/s [2024-11-05T15:56:37.830Z]
[2024-11-05 16:56:22.689369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 16:56:22.689405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
[2024-11-05 16:56:22.689438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 16:56:22.689445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
[... identical nvme_qpair command/completion notice pairs repeat for WRITE lba:78800-79544 and READ lba:78728-78776, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
[2024-11-05 16:56:22.692061] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.769 [2024-11-05 16:56:22.692067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:30.769 [2024-11-05 16:56:22.692080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.769 [2024-11-05 16:56:22.692086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:30.769 [2024-11-05 16:56:22.692200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.769 [2024-11-05 16:56:22.692207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:30.769 [2024-11-05 16:56:22.692223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.769 [2024-11-05 16:56:22.692229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:30.769 [2024-11-05 16:56:22.692245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.769 [2024-11-05 16:56:22.692253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:30.769 [2024-11-05 16:56:22.692268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.769 [2024-11-05 16:56:22.692274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:22.692659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:22.692664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:30.770 10377.42 IOPS, 40.54 MiB/s [2024-11-05T15:56:37.833Z] 9579.15 IOPS, 37.42 MiB/s [2024-11-05T15:56:37.833Z] 8894.93 IOPS, 34.75 MiB/s [2024-11-05T15:56:37.833Z] 8320.53 IOPS, 32.50 MiB/s [2024-11-05T15:56:37.833Z] 8621.38 IOPS, 33.68 MiB/s [2024-11-05T15:56:37.833Z] 8863.41 IOPS, 34.62 MiB/s [2024-11-05T15:56:37.833Z] 9287.83 IOPS, 36.28 MiB/s [2024-11-05T15:56:37.833Z] 9694.11 IOPS, 37.87 MiB/s [2024-11-05T15:56:37.833Z] 9976.30 IOPS, 38.97 MiB/s [2024-11-05T15:56:37.833Z] 10109.00 IOPS, 39.49 MiB/s [2024-11-05T15:56:37.833Z] 10249.00 IOPS, 40.04 MiB/s [2024-11-05T15:56:37.833Z] 10506.22 IOPS, 41.04 MiB/s [2024-11-05T15:56:37.833Z] 10776.54 IOPS, 42.10 MiB/s [2024-11-05T15:56:37.833Z] [2024-11-05 16:56:35.369883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:35.369920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.369953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:35.369960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.369971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.770 [2024-11-05 16:56:35.369976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.369991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.369996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:68 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:30.770 [2024-11-05 16:56:35.370743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.770 [2024-11-05 16:56:35.370754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:30.770 10912.96 IOPS, 42.63 MiB/s [2024-11-05T15:56:37.833Z] 10868.38 IOPS, 42.45 MiB/s [2024-11-05T15:56:37.833Z] Received shutdown signal, test time was about 26.818539 seconds 00:32:30.770 00:32:30.770 Latency(us) 00:32:30.770 [2024-11-05T15:56:37.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.770 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:30.770 Verification LBA range: start 0x0 length 0x4000 00:32:30.770 Nvme0n1 : 26.82 10829.70 42.30 0.00 0.00 11802.08 317.44 3019898.88 00:32:30.770 [2024-11-05T15:56:37.833Z] =================================================================================================================== 00:32:30.770 [2024-11-05T15:56:37.834Z] Total : 10829.70 42.30 0.00 0.00 11802.08 317.44 3019898.88 00:32:30.771 16:56:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:31.031 rmmod nvme_tcp 00:32:31.031 rmmod nvme_fabrics 00:32:31.031 rmmod nvme_keyring 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 3306396 ']' 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 3306396 00:32:31.031 16:56:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3306396 ']' 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3306396 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:32:31.031 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:31.032 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3306396 00:32:31.032 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:31.032 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:31.032 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3306396' 00:32:31.032 killing process with pid 3306396 00:32:31.032 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3306396 00:32:31.032 16:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3306396 00:32:31.292 16:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:31.292 16:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:32:31.292 16:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@254 -- # local dev 00:32:31.292 16:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:31.292 16:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:31.292 16:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:32:31.292 16:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # return 0 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@211 -- # 
local dev=cvl_0_1 in_ns= 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@274 -- # iptr 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-save 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-restore 00:32:33.203 00:32:33.203 real 0m40.580s 00:32:33.203 user 1m44.150s 00:32:33.203 sys 0m11.713s 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:33.203 ************************************ 00:32:33.203 END TEST nvmf_host_multipath_status 00:32:33.203 ************************************ 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:32:33.203 16:56:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.463 ************************************ 00:32:33.463 START TEST nvmf_discovery_remove_ifc 00:32:33.463 ************************************ 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:33.463 * Looking for test storage... 00:32:33.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.463 
16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:33.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.463 --rc genhtml_branch_coverage=1 00:32:33.463 --rc genhtml_function_coverage=1 00:32:33.463 --rc genhtml_legend=1 00:32:33.463 --rc geninfo_all_blocks=1 00:32:33.463 --rc geninfo_unexecuted_blocks=1 00:32:33.463 00:32:33.463 ' 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:33.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.463 --rc genhtml_branch_coverage=1 00:32:33.463 --rc genhtml_function_coverage=1 00:32:33.463 --rc genhtml_legend=1 00:32:33.463 --rc geninfo_all_blocks=1 00:32:33.463 --rc geninfo_unexecuted_blocks=1 00:32:33.463 00:32:33.463 ' 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:33.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.463 --rc genhtml_branch_coverage=1 00:32:33.463 --rc genhtml_function_coverage=1 00:32:33.463 --rc genhtml_legend=1 00:32:33.463 --rc geninfo_all_blocks=1 00:32:33.463 --rc geninfo_unexecuted_blocks=1 00:32:33.463 00:32:33.463 ' 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:33.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.463 --rc genhtml_branch_coverage=1 00:32:33.463 --rc genhtml_function_coverage=1 00:32:33.463 --rc 
genhtml_legend=1 00:32:33.463 --rc geninfo_all_blocks=1 00:32:33.463 --rc geninfo_unexecuted_blocks=1 00:32:33.463 00:32:33.463 ' 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.463 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.724 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:33.725 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:32:33.725 16:56:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:32:33.725 16:56:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:41.862 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.862 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:32:41.862 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:32:41.863 16:56:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:41.863 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:32:41.863 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:32:41.863 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:41.863 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@247 -- # create_target_ns 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:41.863 16:56:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:41.863 16:56:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:32:41.863 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:41.864 10.0.0.1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:41.864 10.0.0.2 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:41.864 16:56:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:32:41.864 
16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:41.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.606 ms 00:32:41.864 00:32:41.864 --- 10.0.0.1 ping statistics --- 00:32:41.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.864 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
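The trace above resolves each interface's IP address by reading its `ifalias` attribute (`cat /sys/class/net/<dev>/ifalias`, optionally inside the `nvmf_ns_spdk` namespace). A minimal standalone sketch of that lookup follows; since the `cvl_0_*` NICs from this run are not assumed present, a temporary file stands in for the real sysfs path (an assumption for illustration only):

```shell
#!/bin/sh
# Hedged sketch of the ifalias-based IP lookup seen in nvmf/setup.sh's
# get_ip_address: read the ifalias file for a device and echo the
# address only if one is set.
get_ip_address() {
    # $1 = path standing in for /sys/class/net/<dev>/ifalias
    # (assumption: a plain file is used so the sketch runs without
    # the cvl_0_* interfaces from this log)
    ip=$(cat "$1")
    [ -n "$ip" ] && echo "$ip"
}

# Simulate an interface whose ifalias holds 10.0.0.1, as in the trace
tmp=$(mktemp)
echo 10.0.0.1 > "$tmp"
get_ip_address "$tmp"
rm -f "$tmp"
```

In the real script the result is then fed to `ping -c 1` for each initiator/target pair, exactly as the ping output above shows.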
00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:41.864 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:41.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:32:41.864 00:32:41.864 --- 10.0.0.2 ping statistics --- 00:32:41.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.865 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:32:41.865 ' 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:41.865 16:56:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=3316663 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 3316663 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3316663 ']' 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:41.865 16:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:41.865 [2024-11-05 16:56:48.029365] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:32:41.865 [2024-11-05 16:56:48.029435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.865 [2024-11-05 16:56:48.128946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.865 [2024-11-05 16:56:48.178439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.865 [2024-11-05 16:56:48.178492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.865 [2024-11-05 16:56:48.178500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.865 [2024-11-05 16:56:48.178508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.865 [2024-11-05 16:56:48.178514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:41.865 [2024-11-05 16:56:48.179270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.865 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:41.866 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:32:41.866 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:41.866 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:41.866 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:42.127 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.127 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:42.127 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.127 16:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:42.127 [2024-11-05 16:56:48.964669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:42.127 [2024-11-05 16:56:48.972939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:42.127 null0 00:32:42.127 [2024-11-05 16:56:49.004881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3316763 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3316763 /tmp/host.sock 
00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3316763 ']' 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:42.127 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:42.127 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:42.127 [2024-11-05 16:56:49.082896] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:32:42.127 [2024-11-05 16:56:49.082962] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316763 ] 00:32:42.127 [2024-11-05 16:56:49.158213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.424 [2024-11-05 16:56:49.200551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.096 16:56:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.096 16:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:44.038 [2024-11-05 16:56:51.012951] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:44.039 [2024-11-05 16:56:51.012976] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:44.039 [2024-11-05 16:56:51.012990] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:44.039 [2024-11-05 16:56:51.101297] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:44.300 [2024-11-05 16:56:51.162006] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:44.300 [2024-11-05 16:56:51.163158] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d023f0:1 started. 
00:32:44.300 [2024-11-05 16:56:51.164767] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:44.300 [2024-11-05 16:56:51.164812] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:44.300 [2024-11-05 16:56:51.164832] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:44.300 [2024-11-05 16:56:51.164846] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:44.300 [2024-11-05 16:56:51.164869] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:44.300 [2024-11-05 16:56:51.172415] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d023f0 was disconnected and freed. delete nvme_qpair. 
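The `sleep 1` iterations that follow poll the bdev list once a second until it matches the expected value (`nvme0n1` while the controller is attached, then the empty string after the interface is deleted). A standalone hedged sketch of that loop, with `rpc_cmd` stubbed out (an assumption, so the sketch runs without a live target) in place of the real `rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'` pipeline:

```shell
#!/bin/sh
# Stub standing in for:
#   rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
# (assumption: the RPC call is replaced so the sketch is self-contained)
rpc_bdev_names() {
    echo nvme0n1
}

# Mirror of get_bdev_list: sort the names and join them on one line
get_bdev_list() {
    rpc_bdev_names | sort | xargs
}

# Mirror of wait_for_bdev: re-poll once a second until the joined
# list equals the expected argument
wait_for_bdev() {
    while [ "$(get_bdev_list)" != "$1" ]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1 && echo "bdev present"
```

In the log, `wait_for_bdev ''` only completes once `ip addr del` and `ip link set cvl_0_1 down` have forced the controller into a failed state and the bdev is torn down, which is what the later reconnect/timeout errors reflect.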
00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:44.300 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.561 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:44.561 16:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:45.501 16:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 
00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:46.442 16:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:47.828 16:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.772 16:56:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:48.772 16:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:49.713 [2024-11-05 16:56:56.605338] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:49.713 
[2024-11-05 16:56:56.605381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.713 [2024-11-05 16:56:56.605399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.713 [2024-11-05 16:56:56.605410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.713 [2024-11-05 16:56:56.605418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.713 [2024-11-05 16:56:56.605426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.713 [2024-11-05 16:56:56.605433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.713 [2024-11-05 16:56:56.605442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.713 [2024-11-05 16:56:56.605450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.713 [2024-11-05 16:56:56.605458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.713 [2024-11-05 16:56:56.605465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.713 [2024-11-05 16:56:56.605473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdec00 is same with the state(6) to be set 00:32:49.713 [2024-11-05 16:56:56.615361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x1cdec00 (9): Bad file descriptor 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.713 [2024-11-05 16:56:56.625399] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:49.713 [2024-11-05 16:56:56.625411] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:49.713 [2024-11-05 16:56:56.625416] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:49.713 [2024-11-05 16:56:56.625422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:49.713 [2024-11-05 16:56:56.625443] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:49.713 16:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:50.655 [2024-11-05 16:56:57.652783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:50.655 [2024-11-05 16:56:57.652822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdec00 with addr=10.0.0.2, port=4420 00:32:50.655 [2024-11-05 16:56:57.652833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdec00 is same with the state(6) to be set 00:32:50.655 [2024-11-05 16:56:57.652856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdec00 (9): Bad file descriptor 00:32:50.655 [2024-11-05 16:56:57.652898] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:32:50.655 [2024-11-05 16:56:57.652918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:50.655 [2024-11-05 16:56:57.652926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:50.655 [2024-11-05 16:56:57.652936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:50.655 [2024-11-05 16:56:57.652948] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:50.655 [2024-11-05 16:56:57.652954] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:50.655 [2024-11-05 16:56:57.652959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:50.655 [2024-11-05 16:56:57.652967] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:32:50.655 [2024-11-05 16:56:57.652972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:50.655 16:56:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:51.600 [2024-11-05 16:56:58.655346] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:51.600 [2024-11-05 16:56:58.655367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:51.600 [2024-11-05 16:56:58.655379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:51.600 [2024-11-05 16:56:58.655386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:51.600 [2024-11-05 16:56:58.655393] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:32:51.600 [2024-11-05 16:56:58.655401] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:51.600 [2024-11-05 16:56:58.655406] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:51.600 [2024-11-05 16:56:58.655410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:51.600 [2024-11-05 16:56:58.655432] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:51.600 [2024-11-05 16:56:58.655453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:51.600 [2024-11-05 16:56:58.655463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:51.600 [2024-11-05 16:56:58.655473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:51.600 [2024-11-05 16:56:58.655480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:51.600 [2024-11-05 16:56:58.655489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:51.600 [2024-11-05 16:56:58.655496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:51.600 [2024-11-05 16:56:58.655504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:51.600 [2024-11-05 16:56:58.655515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:51.600 [2024-11-05 16:56:58.655524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:51.600 [2024-11-05 16:56:58.655531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:51.600 [2024-11-05 16:56:58.655539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:32:51.600 [2024-11-05 16:56:58.655563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cce340 (9): Bad file descriptor 00:32:51.600 [2024-11-05 16:56:58.656565] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:51.600 [2024-11-05 16:56:58.656576] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:32:51.866 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:51.866 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.866 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:51.866 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:51.867 16:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:53.255 16:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:53.826 [2024-11-05 16:57:00.707958] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:53.826 [2024-11-05 16:57:00.707977] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:53.826 [2024-11-05 16:57:00.707991] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:53.826 [2024-11-05 16:57:00.794259] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:54.087 16:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:54.087 16:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.087 16:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:54.087 16:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.087 16:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:54.087 16:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:54.087 16:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:54.087 16:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.087 16:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:54.087 16:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:54.087 [2024-11-05 16:57:01.017502] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:32:54.087 [2024-11-05 16:57:01.018349] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1d11150:1 started. 
00:32:54.087 [2024-11-05 16:57:01.019608] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:54.087 [2024-11-05 16:57:01.019644] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:54.087 [2024-11-05 16:57:01.019664] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:54.087 [2024-11-05 16:57:01.019679] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:54.087 [2024-11-05 16:57:01.019687] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:54.087 [2024-11-05 16:57:01.026652] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1d11150 was disconnected and freed. delete nvme_qpair. 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:55.030 16:57:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3316763 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3316763 ']' 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3316763 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:55.030 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3316763 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3316763' 00:32:55.291 killing process with pid 3316763 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3316763 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3316763 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:55.291 
16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:55.291 rmmod nvme_tcp 00:32:55.291 rmmod nvme_fabrics 00:32:55.291 rmmod nvme_keyring 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 3316663 ']' 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 3316663 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3316663 ']' 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3316663 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:55.291 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3316663 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3316663' 00:32:55.551 
killing process with pid 3316663 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3316663 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3316663 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:55.551 16:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:58.093 16:57:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:32:58.093 00:32:58.093 real 0m24.282s 00:32:58.093 user 0m29.428s 00:32:58.093 sys 0m7.027s 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:58.093 ************************************ 00:32:58.093 END TEST nvmf_discovery_remove_ifc 00:32:58.093 ************************************ 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.093 ************************************ 00:32:58.093 START TEST nvmf_identify_kernel_target 00:32:58.093 ************************************ 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:58.093 * Looking for test storage... 
00:32:58.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:32:58.093 16:57:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:58.093 16:57:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.093 --rc genhtml_branch_coverage=1 00:32:58.093 --rc genhtml_function_coverage=1 00:32:58.093 --rc genhtml_legend=1 00:32:58.093 --rc geninfo_all_blocks=1 00:32:58.093 --rc geninfo_unexecuted_blocks=1 00:32:58.093 00:32:58.093 ' 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.093 --rc genhtml_branch_coverage=1 00:32:58.093 --rc genhtml_function_coverage=1 00:32:58.093 --rc genhtml_legend=1 00:32:58.093 --rc geninfo_all_blocks=1 00:32:58.093 --rc geninfo_unexecuted_blocks=1 00:32:58.093 00:32:58.093 ' 00:32:58.093 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.093 --rc genhtml_branch_coverage=1 00:32:58.093 --rc genhtml_function_coverage=1 00:32:58.093 --rc genhtml_legend=1 00:32:58.093 --rc geninfo_all_blocks=1 00:32:58.094 --rc geninfo_unexecuted_blocks=1 00:32:58.094 00:32:58.094 ' 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:58.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.094 --rc genhtml_branch_coverage=1 00:32:58.094 --rc genhtml_function_coverage=1 00:32:58.094 --rc genhtml_legend=1 00:32:58.094 --rc geninfo_all_blocks=1 00:32:58.094 --rc geninfo_unexecuted_blocks=1 00:32:58.094 00:32:58.094 ' 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:58.094 16:57:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:58.094 16:57:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:58.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:32:58.094 16:57:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:32:58.094 16:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # e810=() 
00:33:06.237 16:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.237 16:57:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:06.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:06.237 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:06.237 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:06.237 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:06.237 
16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@247 -- # create_target_ns 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:33:06.237 16:57:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:06.237 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:06.238 10.0.0.1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:06.238 10.0.0.2 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:33:06.238 
16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.238 16:57:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:06.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:06.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.592 ms 00:33:06.238 00:33:06.238 --- 10.0.0.1 ping statistics --- 00:33:06.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.238 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:06.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:33:06.238 00:33:06.238 --- 10.0.0.2 ping statistics --- 00:33:06.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.238 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:06.238 
16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.238 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.239 
16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # return 1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev= 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # return 0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # return 1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev= 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # return 0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:06.239 ' 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:06.239 16:57:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:08.786 Waiting for block devices as requested 00:33:08.786 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:09.047 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:09.047 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:09.047 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:09.047 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:09.307 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:09.307 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:09.307 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:09.567 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:09.567 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:09.828 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:09.828 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:09.828 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:09.828 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:10.088 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:10.088 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:10.088 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:10.348 
16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:10.348 No valid GPT data, bailing 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:10.348 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@469 -- # echo 1 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:10.610 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:33:10.610 00:33:10.610 Discovery Log Number of Records 2, Generation counter 2 00:33:10.610 =====Discovery Log Entry 0====== 00:33:10.610 trtype: tcp 00:33:10.610 adrfam: ipv4 00:33:10.610 subtype: current discovery subsystem 00:33:10.610 treq: not specified, sq flow control disable supported 00:33:10.610 portid: 1 00:33:10.610 trsvcid: 4420 00:33:10.610 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:10.610 traddr: 10.0.0.1 00:33:10.610 eflags: none 00:33:10.610 sectype: none 00:33:10.610 =====Discovery Log Entry 1====== 00:33:10.610 trtype: tcp 00:33:10.610 adrfam: ipv4 00:33:10.610 subtype: nvme subsystem 00:33:10.610 treq: not specified, sq flow control disable supported 00:33:10.610 portid: 1 00:33:10.610 trsvcid: 4420 00:33:10.610 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:10.611 traddr: 10.0.0.1 00:33:10.611 eflags: none 
00:33:10.611 sectype: none 00:33:10.611 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:10.611 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:10.611 ===================================================== 00:33:10.611 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:10.611 ===================================================== 00:33:10.611 Controller Capabilities/Features 00:33:10.611 ================================ 00:33:10.611 Vendor ID: 0000 00:33:10.611 Subsystem Vendor ID: 0000 00:33:10.611 Serial Number: 688f55f77ced3b8c2cc1 00:33:10.611 Model Number: Linux 00:33:10.611 Firmware Version: 6.8.9-20 00:33:10.611 Recommended Arb Burst: 0 00:33:10.611 IEEE OUI Identifier: 00 00 00 00:33:10.611 Multi-path I/O 00:33:10.611 May have multiple subsystem ports: No 00:33:10.611 May have multiple controllers: No 00:33:10.611 Associated with SR-IOV VF: No 00:33:10.611 Max Data Transfer Size: Unlimited 00:33:10.611 Max Number of Namespaces: 0 00:33:10.611 Max Number of I/O Queues: 1024 00:33:10.611 NVMe Specification Version (VS): 1.3 00:33:10.611 NVMe Specification Version (Identify): 1.3 00:33:10.611 Maximum Queue Entries: 1024 00:33:10.611 Contiguous Queues Required: No 00:33:10.611 Arbitration Mechanisms Supported 00:33:10.611 Weighted Round Robin: Not Supported 00:33:10.611 Vendor Specific: Not Supported 00:33:10.611 Reset Timeout: 7500 ms 00:33:10.611 Doorbell Stride: 4 bytes 00:33:10.611 NVM Subsystem Reset: Not Supported 00:33:10.611 Command Sets Supported 00:33:10.611 NVM Command Set: Supported 00:33:10.611 Boot Partition: Not Supported 00:33:10.611 Memory Page Size Minimum: 4096 bytes 00:33:10.611 Memory Page Size Maximum: 4096 bytes 00:33:10.611 Persistent Memory Region: Not Supported 00:33:10.611 Optional Asynchronous Events 
Supported 00:33:10.611 Namespace Attribute Notices: Not Supported 00:33:10.611 Firmware Activation Notices: Not Supported 00:33:10.611 ANA Change Notices: Not Supported 00:33:10.611 PLE Aggregate Log Change Notices: Not Supported 00:33:10.611 LBA Status Info Alert Notices: Not Supported 00:33:10.611 EGE Aggregate Log Change Notices: Not Supported 00:33:10.611 Normal NVM Subsystem Shutdown event: Not Supported 00:33:10.611 Zone Descriptor Change Notices: Not Supported 00:33:10.611 Discovery Log Change Notices: Supported 00:33:10.611 Controller Attributes 00:33:10.611 128-bit Host Identifier: Not Supported 00:33:10.611 Non-Operational Permissive Mode: Not Supported 00:33:10.611 NVM Sets: Not Supported 00:33:10.611 Read Recovery Levels: Not Supported 00:33:10.611 Endurance Groups: Not Supported 00:33:10.611 Predictable Latency Mode: Not Supported 00:33:10.611 Traffic Based Keep ALive: Not Supported 00:33:10.611 Namespace Granularity: Not Supported 00:33:10.611 SQ Associations: Not Supported 00:33:10.611 UUID List: Not Supported 00:33:10.611 Multi-Domain Subsystem: Not Supported 00:33:10.611 Fixed Capacity Management: Not Supported 00:33:10.611 Variable Capacity Management: Not Supported 00:33:10.611 Delete Endurance Group: Not Supported 00:33:10.611 Delete NVM Set: Not Supported 00:33:10.611 Extended LBA Formats Supported: Not Supported 00:33:10.611 Flexible Data Placement Supported: Not Supported 00:33:10.611 00:33:10.611 Controller Memory Buffer Support 00:33:10.611 ================================ 00:33:10.611 Supported: No 00:33:10.611 00:33:10.611 Persistent Memory Region Support 00:33:10.611 ================================ 00:33:10.611 Supported: No 00:33:10.611 00:33:10.611 Admin Command Set Attributes 00:33:10.611 ============================ 00:33:10.611 Security Send/Receive: Not Supported 00:33:10.611 Format NVM: Not Supported 00:33:10.611 Firmware Activate/Download: Not Supported 00:33:10.611 Namespace Management: Not Supported 00:33:10.611 Device 
Self-Test: Not Supported 00:33:10.611 Directives: Not Supported 00:33:10.611 NVMe-MI: Not Supported 00:33:10.611 Virtualization Management: Not Supported 00:33:10.611 Doorbell Buffer Config: Not Supported 00:33:10.611 Get LBA Status Capability: Not Supported 00:33:10.611 Command & Feature Lockdown Capability: Not Supported 00:33:10.611 Abort Command Limit: 1 00:33:10.611 Async Event Request Limit: 1 00:33:10.611 Number of Firmware Slots: N/A 00:33:10.611 Firmware Slot 1 Read-Only: N/A 00:33:10.611 Firmware Activation Without Reset: N/A 00:33:10.611 Multiple Update Detection Support: N/A 00:33:10.611 Firmware Update Granularity: No Information Provided 00:33:10.611 Per-Namespace SMART Log: No 00:33:10.611 Asymmetric Namespace Access Log Page: Not Supported 00:33:10.611 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:10.611 Command Effects Log Page: Not Supported 00:33:10.611 Get Log Page Extended Data: Supported 00:33:10.611 Telemetry Log Pages: Not Supported 00:33:10.611 Persistent Event Log Pages: Not Supported 00:33:10.611 Supported Log Pages Log Page: May Support 00:33:10.611 Commands Supported & Effects Log Page: Not Supported 00:33:10.611 Feature Identifiers & Effects Log Page:May Support 00:33:10.611 NVMe-MI Commands & Effects Log Page: May Support 00:33:10.611 Data Area 4 for Telemetry Log: Not Supported 00:33:10.611 Error Log Page Entries Supported: 1 00:33:10.611 Keep Alive: Not Supported 00:33:10.611 00:33:10.611 NVM Command Set Attributes 00:33:10.611 ========================== 00:33:10.611 Submission Queue Entry Size 00:33:10.611 Max: 1 00:33:10.611 Min: 1 00:33:10.611 Completion Queue Entry Size 00:33:10.611 Max: 1 00:33:10.611 Min: 1 00:33:10.611 Number of Namespaces: 0 00:33:10.611 Compare Command: Not Supported 00:33:10.611 Write Uncorrectable Command: Not Supported 00:33:10.611 Dataset Management Command: Not Supported 00:33:10.611 Write Zeroes Command: Not Supported 00:33:10.611 Set Features Save Field: Not Supported 00:33:10.611 
Reservations: Not Supported 00:33:10.611 Timestamp: Not Supported 00:33:10.611 Copy: Not Supported 00:33:10.611 Volatile Write Cache: Not Present 00:33:10.611 Atomic Write Unit (Normal): 1 00:33:10.611 Atomic Write Unit (PFail): 1 00:33:10.611 Atomic Compare & Write Unit: 1 00:33:10.611 Fused Compare & Write: Not Supported 00:33:10.611 Scatter-Gather List 00:33:10.611 SGL Command Set: Supported 00:33:10.611 SGL Keyed: Not Supported 00:33:10.611 SGL Bit Bucket Descriptor: Not Supported 00:33:10.611 SGL Metadata Pointer: Not Supported 00:33:10.611 Oversized SGL: Not Supported 00:33:10.611 SGL Metadata Address: Not Supported 00:33:10.611 SGL Offset: Supported 00:33:10.611 Transport SGL Data Block: Not Supported 00:33:10.611 Replay Protected Memory Block: Not Supported 00:33:10.611 00:33:10.611 Firmware Slot Information 00:33:10.611 ========================= 00:33:10.611 Active slot: 0 00:33:10.611 00:33:10.611 00:33:10.611 Error Log 00:33:10.611 ========= 00:33:10.611 00:33:10.611 Active Namespaces 00:33:10.611 ================= 00:33:10.611 Discovery Log Page 00:33:10.611 ================== 00:33:10.611 Generation Counter: 2 00:33:10.611 Number of Records: 2 00:33:10.611 Record Format: 0 00:33:10.611 00:33:10.611 Discovery Log Entry 0 00:33:10.611 ---------------------- 00:33:10.611 Transport Type: 3 (TCP) 00:33:10.611 Address Family: 1 (IPv4) 00:33:10.611 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:10.611 Entry Flags: 00:33:10.611 Duplicate Returned Information: 0 00:33:10.611 Explicit Persistent Connection Support for Discovery: 0 00:33:10.611 Transport Requirements: 00:33:10.611 Secure Channel: Not Specified 00:33:10.611 Port ID: 1 (0x0001) 00:33:10.611 Controller ID: 65535 (0xffff) 00:33:10.611 Admin Max SQ Size: 32 00:33:10.611 Transport Service Identifier: 4420 00:33:10.611 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:10.611 Transport Address: 10.0.0.1 00:33:10.611 Discovery Log Entry 1 00:33:10.611 ---------------------- 
00:33:10.611 Transport Type: 3 (TCP) 00:33:10.611 Address Family: 1 (IPv4) 00:33:10.611 Subsystem Type: 2 (NVM Subsystem) 00:33:10.611 Entry Flags: 00:33:10.611 Duplicate Returned Information: 0 00:33:10.611 Explicit Persistent Connection Support for Discovery: 0 00:33:10.611 Transport Requirements: 00:33:10.611 Secure Channel: Not Specified 00:33:10.611 Port ID: 1 (0x0001) 00:33:10.611 Controller ID: 65535 (0xffff) 00:33:10.611 Admin Max SQ Size: 32 00:33:10.611 Transport Service Identifier: 4420 00:33:10.611 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:10.611 Transport Address: 10.0.0.1 00:33:10.611 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:10.873 get_feature(0x01) failed 00:33:10.873 get_feature(0x02) failed 00:33:10.873 get_feature(0x04) failed 00:33:10.873 ===================================================== 00:33:10.873 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:10.873 ===================================================== 00:33:10.873 Controller Capabilities/Features 00:33:10.873 ================================ 00:33:10.873 Vendor ID: 0000 00:33:10.873 Subsystem Vendor ID: 0000 00:33:10.873 Serial Number: 0fe3dd5f63308e41808d 00:33:10.873 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:10.873 Firmware Version: 6.8.9-20 00:33:10.873 Recommended Arb Burst: 6 00:33:10.873 IEEE OUI Identifier: 00 00 00 00:33:10.873 Multi-path I/O 00:33:10.873 May have multiple subsystem ports: Yes 00:33:10.873 May have multiple controllers: Yes 00:33:10.873 Associated with SR-IOV VF: No 00:33:10.873 Max Data Transfer Size: Unlimited 00:33:10.873 Max Number of Namespaces: 1024 00:33:10.873 Max Number of I/O Queues: 128 00:33:10.873 NVMe Specification Version (VS): 1.3 00:33:10.873 NVMe 
Specification Version (Identify): 1.3 00:33:10.873 Maximum Queue Entries: 1024 00:33:10.873 Contiguous Queues Required: No 00:33:10.873 Arbitration Mechanisms Supported 00:33:10.873 Weighted Round Robin: Not Supported 00:33:10.873 Vendor Specific: Not Supported 00:33:10.873 Reset Timeout: 7500 ms 00:33:10.873 Doorbell Stride: 4 bytes 00:33:10.873 NVM Subsystem Reset: Not Supported 00:33:10.873 Command Sets Supported 00:33:10.873 NVM Command Set: Supported 00:33:10.873 Boot Partition: Not Supported 00:33:10.873 Memory Page Size Minimum: 4096 bytes 00:33:10.873 Memory Page Size Maximum: 4096 bytes 00:33:10.873 Persistent Memory Region: Not Supported 00:33:10.873 Optional Asynchronous Events Supported 00:33:10.873 Namespace Attribute Notices: Supported 00:33:10.873 Firmware Activation Notices: Not Supported 00:33:10.873 ANA Change Notices: Supported 00:33:10.873 PLE Aggregate Log Change Notices: Not Supported 00:33:10.873 LBA Status Info Alert Notices: Not Supported 00:33:10.873 EGE Aggregate Log Change Notices: Not Supported 00:33:10.873 Normal NVM Subsystem Shutdown event: Not Supported 00:33:10.873 Zone Descriptor Change Notices: Not Supported 00:33:10.873 Discovery Log Change Notices: Not Supported 00:33:10.873 Controller Attributes 00:33:10.873 128-bit Host Identifier: Supported 00:33:10.873 Non-Operational Permissive Mode: Not Supported 00:33:10.873 NVM Sets: Not Supported 00:33:10.873 Read Recovery Levels: Not Supported 00:33:10.873 Endurance Groups: Not Supported 00:33:10.873 Predictable Latency Mode: Not Supported 00:33:10.873 Traffic Based Keep ALive: Supported 00:33:10.873 Namespace Granularity: Not Supported 00:33:10.873 SQ Associations: Not Supported 00:33:10.873 UUID List: Not Supported 00:33:10.873 Multi-Domain Subsystem: Not Supported 00:33:10.873 Fixed Capacity Management: Not Supported 00:33:10.873 Variable Capacity Management: Not Supported 00:33:10.873 Delete Endurance Group: Not Supported 00:33:10.873 Delete NVM Set: Not Supported 00:33:10.873 
Extended LBA Formats Supported: Not Supported 00:33:10.873 Flexible Data Placement Supported: Not Supported 00:33:10.873 00:33:10.873 Controller Memory Buffer Support 00:33:10.873 ================================ 00:33:10.873 Supported: No 00:33:10.873 00:33:10.873 Persistent Memory Region Support 00:33:10.873 ================================ 00:33:10.873 Supported: No 00:33:10.873 00:33:10.873 Admin Command Set Attributes 00:33:10.873 ============================ 00:33:10.873 Security Send/Receive: Not Supported 00:33:10.873 Format NVM: Not Supported 00:33:10.873 Firmware Activate/Download: Not Supported 00:33:10.873 Namespace Management: Not Supported 00:33:10.873 Device Self-Test: Not Supported 00:33:10.873 Directives: Not Supported 00:33:10.873 NVMe-MI: Not Supported 00:33:10.873 Virtualization Management: Not Supported 00:33:10.873 Doorbell Buffer Config: Not Supported 00:33:10.873 Get LBA Status Capability: Not Supported 00:33:10.874 Command & Feature Lockdown Capability: Not Supported 00:33:10.874 Abort Command Limit: 4 00:33:10.874 Async Event Request Limit: 4 00:33:10.874 Number of Firmware Slots: N/A 00:33:10.874 Firmware Slot 1 Read-Only: N/A 00:33:10.874 Firmware Activation Without Reset: N/A 00:33:10.874 Multiple Update Detection Support: N/A 00:33:10.874 Firmware Update Granularity: No Information Provided 00:33:10.874 Per-Namespace SMART Log: Yes 00:33:10.874 Asymmetric Namespace Access Log Page: Supported 00:33:10.874 ANA Transition Time : 10 sec 00:33:10.874 00:33:10.874 Asymmetric Namespace Access Capabilities 00:33:10.874 ANA Optimized State : Supported 00:33:10.874 ANA Non-Optimized State : Supported 00:33:10.874 ANA Inaccessible State : Supported 00:33:10.874 ANA Persistent Loss State : Supported 00:33:10.874 ANA Change State : Supported 00:33:10.874 ANAGRPID is not changed : No 00:33:10.874 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:10.874 00:33:10.874 ANA Group Identifier Maximum : 128 00:33:10.874 Number of ANA Group Identifiers 
: 128 00:33:10.874 Max Number of Allowed Namespaces : 1024 00:33:10.874 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:10.874 Command Effects Log Page: Supported 00:33:10.874 Get Log Page Extended Data: Supported 00:33:10.874 Telemetry Log Pages: Not Supported 00:33:10.874 Persistent Event Log Pages: Not Supported 00:33:10.874 Supported Log Pages Log Page: May Support 00:33:10.874 Commands Supported & Effects Log Page: Not Supported 00:33:10.874 Feature Identifiers & Effects Log Page:May Support 00:33:10.874 NVMe-MI Commands & Effects Log Page: May Support 00:33:10.874 Data Area 4 for Telemetry Log: Not Supported 00:33:10.874 Error Log Page Entries Supported: 128 00:33:10.874 Keep Alive: Supported 00:33:10.874 Keep Alive Granularity: 1000 ms 00:33:10.874 00:33:10.874 NVM Command Set Attributes 00:33:10.874 ========================== 00:33:10.874 Submission Queue Entry Size 00:33:10.874 Max: 64 00:33:10.874 Min: 64 00:33:10.874 Completion Queue Entry Size 00:33:10.874 Max: 16 00:33:10.874 Min: 16 00:33:10.874 Number of Namespaces: 1024 00:33:10.874 Compare Command: Not Supported 00:33:10.874 Write Uncorrectable Command: Not Supported 00:33:10.874 Dataset Management Command: Supported 00:33:10.874 Write Zeroes Command: Supported 00:33:10.874 Set Features Save Field: Not Supported 00:33:10.874 Reservations: Not Supported 00:33:10.874 Timestamp: Not Supported 00:33:10.874 Copy: Not Supported 00:33:10.874 Volatile Write Cache: Present 00:33:10.874 Atomic Write Unit (Normal): 1 00:33:10.874 Atomic Write Unit (PFail): 1 00:33:10.874 Atomic Compare & Write Unit: 1 00:33:10.874 Fused Compare & Write: Not Supported 00:33:10.874 Scatter-Gather List 00:33:10.874 SGL Command Set: Supported 00:33:10.874 SGL Keyed: Not Supported 00:33:10.874 SGL Bit Bucket Descriptor: Not Supported 00:33:10.874 SGL Metadata Pointer: Not Supported 00:33:10.874 Oversized SGL: Not Supported 00:33:10.874 SGL Metadata Address: Not Supported 00:33:10.874 SGL Offset: Supported 00:33:10.874 Transport 
SGL Data Block: Not Supported 00:33:10.874 Replay Protected Memory Block: Not Supported 00:33:10.874 00:33:10.874 Firmware Slot Information 00:33:10.874 ========================= 00:33:10.874 Active slot: 0 00:33:10.874 00:33:10.874 Asymmetric Namespace Access 00:33:10.874 =========================== 00:33:10.874 Change Count : 0 00:33:10.874 Number of ANA Group Descriptors : 1 00:33:10.874 ANA Group Descriptor : 0 00:33:10.874 ANA Group ID : 1 00:33:10.874 Number of NSID Values : 1 00:33:10.874 Change Count : 0 00:33:10.874 ANA State : 1 00:33:10.874 Namespace Identifier : 1 00:33:10.874 00:33:10.874 Commands Supported and Effects 00:33:10.874 ============================== 00:33:10.874 Admin Commands 00:33:10.874 -------------- 00:33:10.874 Get Log Page (02h): Supported 00:33:10.874 Identify (06h): Supported 00:33:10.874 Abort (08h): Supported 00:33:10.874 Set Features (09h): Supported 00:33:10.874 Get Features (0Ah): Supported 00:33:10.874 Asynchronous Event Request (0Ch): Supported 00:33:10.874 Keep Alive (18h): Supported 00:33:10.874 I/O Commands 00:33:10.874 ------------ 00:33:10.874 Flush (00h): Supported 00:33:10.874 Write (01h): Supported LBA-Change 00:33:10.874 Read (02h): Supported 00:33:10.874 Write Zeroes (08h): Supported LBA-Change 00:33:10.874 Dataset Management (09h): Supported 00:33:10.874 00:33:10.874 Error Log 00:33:10.874 ========= 00:33:10.874 Entry: 0 00:33:10.874 Error Count: 0x3 00:33:10.874 Submission Queue Id: 0x0 00:33:10.874 Command Id: 0x5 00:33:10.874 Phase Bit: 0 00:33:10.874 Status Code: 0x2 00:33:10.874 Status Code Type: 0x0 00:33:10.874 Do Not Retry: 1 00:33:10.874 Error Location: 0x28 00:33:10.874 LBA: 0x0 00:33:10.874 Namespace: 0x0 00:33:10.874 Vendor Log Page: 0x0 00:33:10.874 ----------- 00:33:10.874 Entry: 1 00:33:10.874 Error Count: 0x2 00:33:10.874 Submission Queue Id: 0x0 00:33:10.874 Command Id: 0x5 00:33:10.874 Phase Bit: 0 00:33:10.874 Status Code: 0x2 00:33:10.874 Status Code Type: 0x0 00:33:10.874 Do Not Retry: 1 
00:33:10.874 Error Location: 0x28 00:33:10.874 LBA: 0x0 00:33:10.874 Namespace: 0x0 00:33:10.874 Vendor Log Page: 0x0 00:33:10.874 ----------- 00:33:10.874 Entry: 2 00:33:10.874 Error Count: 0x1 00:33:10.874 Submission Queue Id: 0x0 00:33:10.874 Command Id: 0x4 00:33:10.874 Phase Bit: 0 00:33:10.874 Status Code: 0x2 00:33:10.874 Status Code Type: 0x0 00:33:10.874 Do Not Retry: 1 00:33:10.874 Error Location: 0x28 00:33:10.874 LBA: 0x0 00:33:10.874 Namespace: 0x0 00:33:10.874 Vendor Log Page: 0x0 00:33:10.874 00:33:10.874 Number of Queues 00:33:10.874 ================ 00:33:10.874 Number of I/O Submission Queues: 128 00:33:10.874 Number of I/O Completion Queues: 128 00:33:10.874 00:33:10.874 ZNS Specific Controller Data 00:33:10.874 ============================ 00:33:10.874 Zone Append Size Limit: 0 00:33:10.874 00:33:10.874 00:33:10.874 Active Namespaces 00:33:10.874 ================= 00:33:10.874 get_feature(0x05) failed 00:33:10.874 Namespace ID:1 00:33:10.874 Command Set Identifier: NVM (00h) 00:33:10.874 Deallocate: Supported 00:33:10.874 Deallocated/Unwritten Error: Not Supported 00:33:10.874 Deallocated Read Value: Unknown 00:33:10.874 Deallocate in Write Zeroes: Not Supported 00:33:10.874 Deallocated Guard Field: 0xFFFF 00:33:10.874 Flush: Supported 00:33:10.874 Reservation: Not Supported 00:33:10.874 Namespace Sharing Capabilities: Multiple Controllers 00:33:10.874 Size (in LBAs): 3750748848 (1788GiB) 00:33:10.874 Capacity (in LBAs): 3750748848 (1788GiB) 00:33:10.874 Utilization (in LBAs): 3750748848 (1788GiB) 00:33:10.874 UUID: 62efcf8f-4224-4d5d-9550-498f98ff4e7d 00:33:10.874 Thin Provisioning: Not Supported 00:33:10.874 Per-NS Atomic Units: Yes 00:33:10.874 Atomic Write Unit (Normal): 8 00:33:10.874 Atomic Write Unit (PFail): 8 00:33:10.874 Preferred Write Granularity: 8 00:33:10.874 Atomic Compare & Write Unit: 8 00:33:10.874 Atomic Boundary Size (Normal): 0 00:33:10.874 Atomic Boundary Size (PFail): 0 00:33:10.874 Atomic Boundary Offset: 0 00:33:10.874 
NGUID/EUI64 Never Reused: No 00:33:10.874 ANA group ID: 1 00:33:10.874 Namespace Write Protected: No 00:33:10.874 Number of LBA Formats: 1 00:33:10.874 Current LBA Format: LBA Format #00 00:33:10.874 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:10.874 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:10.874 rmmod nvme_tcp 00:33:10.874 rmmod nvme_fabrics 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:33:10.874 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@254 -- # local dev 00:33:10.875 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:10.875 
16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:10.875 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:10.875 16:57:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # return 0 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:12.790 16:57:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:33:12.790 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@274 -- # iptr 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-save 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-restore 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f 
/sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:33:13.051 16:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:16.359 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:16.359 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:16.619 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:16.619 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:16.619 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:16.619 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:16.619 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:16.880 00:33:16.880 
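The `clean_kernel_target` trace above tears down the kernel nvmet configuration through configfs in a strict order: the port→subsystem symlink and the namespace directory go before their parent directories, because `rmdir` refuses to remove a non-empty directory. A minimal sketch of the same ordering against a throwaway directory tree (the temp dir stands in for `/sys/kernel/config/nvmet`; this is an illustration, not the test script itself):

```shell
# Mimic the nvmet configfs layout under a temp dir to show why the
# teardown in the log runs children-before-parents.
root=$(mktemp -d)                       # stand-in for /sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
mkdir -p "$root/subsystems/$nqn/namespaces/1" "$root/ports/1/subsystems"
ln -s "$root/subsystems/$nqn" "$root/ports/1/subsystems/$nqn"

rm -f  "$root/ports/1/subsystems/$nqn"              # 1) drop port->subsystem link
rmdir  "$root/subsystems/$nqn/namespaces/1"         # 2) remove the namespace
rmdir  "$root/ports/1/subsystems" "$root/ports/1"   # 3) remove the port
rmdir  "$root/subsystems/$nqn/namespaces" "$root/subsystems/$nqn"  # 4) remove the subsystem
# (steps 3 and 4 need one extra rmdir each here only because this mock
# lacks configfs's kernel-managed attribute directories)
```

Reversing steps 2 and 4 would fail: `rmdir` on the subsystem directory returns "Directory not empty" while the namespace still exists, which is exactly why the log issues the removals in this sequence.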
real 0m19.209s 00:33:16.880 user 0m5.368s 00:33:16.880 sys 0m10.954s 00:33:16.880 16:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:16.880 16:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:16.880 ************************************ 00:33:16.880 END TEST nvmf_identify_kernel_target 00:33:16.880 ************************************ 00:33:16.880 16:57:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:16.880 16:57:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:16.880 16:57:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:16.880 16:57:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.880 ************************************ 00:33:16.880 START TEST nvmf_auth_host 00:33:16.880 ************************************ 00:33:16.880 16:57:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:17.141 * Looking for test storage... 
00:33:17.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.141 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:17.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.141 --rc genhtml_branch_coverage=1 00:33:17.141 --rc genhtml_function_coverage=1 00:33:17.142 --rc genhtml_legend=1 00:33:17.142 --rc geninfo_all_blocks=1 00:33:17.142 --rc geninfo_unexecuted_blocks=1 00:33:17.142 00:33:17.142 ' 00:33:17.142 16:57:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:17.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.142 --rc genhtml_branch_coverage=1 00:33:17.142 --rc genhtml_function_coverage=1 00:33:17.142 --rc genhtml_legend=1 00:33:17.142 --rc geninfo_all_blocks=1 00:33:17.142 --rc geninfo_unexecuted_blocks=1 00:33:17.142 00:33:17.142 ' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:17.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.142 --rc genhtml_branch_coverage=1 00:33:17.142 --rc genhtml_function_coverage=1 00:33:17.142 --rc genhtml_legend=1 00:33:17.142 --rc geninfo_all_blocks=1 00:33:17.142 --rc geninfo_unexecuted_blocks=1 00:33:17.142 00:33:17.142 ' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:17.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.142 --rc genhtml_branch_coverage=1 00:33:17.142 --rc genhtml_function_coverage=1 00:33:17.142 --rc genhtml_legend=1 00:33:17.142 --rc geninfo_all_blocks=1 00:33:17.142 --rc geninfo_unexecuted_blocks=1 00:33:17.142 00:33:17.142 ' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
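The xtrace above shows `scripts/common.sh` deciding whether the installed `lcov` predates 1.15 by splitting each version string on `.`, `-`, and `:` and comparing the fields numerically (`cmp_versions 1.15 '<' 2`). A standalone sketch of that style of dotted-version comparison (the function name `version_lt` is illustrative, not the script's own):

```shell
# Field-by-field numeric version comparison in the style of the traced
# cmp_versions logic: split both versions on . - : and compare per field,
# treating missing fields as 0.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b max=${#v1[@]}
    if (( ${#v2[@]} > max )); then max=${#v2[@]}; fi
    for (( i = 0; i < max; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi   # strictly less in this field
        if (( a > b )); then return 1; fi   # strictly greater: not less-than
    done
    return 1                                # all fields equal: not less-than
}
```

With this, `version_lt 1.15 2` succeeds (so the `lt 1.15 2` branch in the trace takes the "older lcov" path), while `version_lt 2.1 2.1` fails.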
00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:17.142 
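The PATH echoed above has accumulated the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries many times over because `paths/export.sh` prepends them once per nested test script that sources it. Duplicates are harmless for lookup (the first match wins), but an order-preserving dedup keeps the variable readable; a small sketch (the function name `dedup_path` is illustrative, not part of the test scripts):

```shell
# Order-preserving PATH dedup: keep only the first occurrence of each
# colon-separated entry, as seen left to right.
dedup_path() {
    local IFS=: entry out=
    declare -A seen                  # declare inside a function is local
    for entry in $1; do              # IFS=: splits the PATH string
        if [[ -n ${seen[$entry]} ]]; then continue; fi
        seen[$entry]=1
        out+=${out:+:}$entry         # join with ':' after the first entry
    done
    printf '%s\n' "$out"
}
```

Applied to the PATH in the log, this would collapse the six repetitions of each `/opt/...` prefix down to one while leaving the relative order of first appearances intact.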
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 
00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:33:17.142 16:57:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 
00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ 
tcp == rdma ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:25.288 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:25.288 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.288 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.288 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:25.289 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:25.289 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@247 -- # create_target_ns 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:25.289 10.0.0.1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk 
tee /sys/class/net/cvl_0_1/ifalias 00:33:25.289 10.0.0.2 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:25.289 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:25.289 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:25.290 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:25.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.607 ms 00:33:25.290 00:33:25.290 --- 10.0.0.1 ping statistics --- 00:33:25.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.290 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:25.290 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:25.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:33:25.290 00:33:25.290 --- 10.0.0.2 ping statistics --- 00:33:25.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.290 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:25.290 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:25.290 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # return 1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev= 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # return 0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:25.290 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # return 1 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev= 00:33:25.290 16:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # return 0 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:25.290 ' 00:33:25.290 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=3331894 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 3331894 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3331894 ']' 00:33:25.291 
16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:25.291 16:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:33:25.551 16:57:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=32b63f4978eb18d3649cdbe1baefefe5 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.G5D 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 32b63f4978eb18d3649cdbe1baefefe5 0 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 32b63f4978eb18d3649cdbe1baefefe5 0 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=32b63f4978eb18d3649cdbe1baefefe5 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:33:25.551 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.G5D 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.G5D 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.G5D 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:25.813 16:57:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=60c0ebd77cbb7dc1b46828153b465888cb1e4120906ffc2e279f74a542442c2a 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.yBJ 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 60c0ebd77cbb7dc1b46828153b465888cb1e4120906ffc2e279f74a542442c2a 3 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 60c0ebd77cbb7dc1b46828153b465888cb1e4120906ffc2e279f74a542442c2a 3 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=60c0ebd77cbb7dc1b46828153b465888cb1e4120906ffc2e279f74a542442c2a 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.yBJ 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo 
/tmp/spdk.key-sha512.yBJ 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.yBJ 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=27c342763a329fc45e9e90e262b11c35cd3395206b36f4df 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.CXt 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 27c342763a329fc45e9e90e262b11c35cd3395206b36f4df 0 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 27c342763a329fc45e9e90e262b11c35cd3395206b36f4df 0 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=27c342763a329fc45e9e90e262b11c35cd3395206b36f4df 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 
00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.CXt 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.CXt 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.CXt 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=9c9acc14b7aca95d40a19545b1b4274f4e8549cb56d5756a 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.iSc 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 9c9acc14b7aca95d40a19545b1b4274f4e8549cb56d5756a 2 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 9c9acc14b7aca95d40a19545b1b4274f4e8549cb56d5756a 2 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:25.813 16:57:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=9c9acc14b7aca95d40a19545b1b4274f4e8549cb56d5756a 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.iSc 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.iSc 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.iSc 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=c74d63a43f6e9c8bc0984e276bf0eb3f 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.kf8 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key c74d63a43f6e9c8bc0984e276bf0eb3f 1 
00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 c74d63a43f6e9c8bc0984e276bf0eb3f 1 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=c74d63a43f6e9c8bc0984e276bf0eb3f 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:33:25.813 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.kf8 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.kf8 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kf8 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=8cd258e05ff439ef76a838df6f674488 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:33:26.075 
16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.MqN 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 8cd258e05ff439ef76a838df6f674488 1 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 8cd258e05ff439ef76a838df6f674488 1 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=8cd258e05ff439ef76a838df6f674488 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.MqN 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.MqN 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.MqN 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 
00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=94ac9da1990d49c7727eaecb5024db2d28024651a4c8d63f 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.Ej2 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 94ac9da1990d49c7727eaecb5024db2d28024651a4c8d63f 2 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 94ac9da1990d49c7727eaecb5024db2d28024651a4c8d63f 2 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=94ac9da1990d49c7727eaecb5024db2d28024651a4c8d63f 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:33:26.075 16:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.Ej2 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.Ej2 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ej2 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 
00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=a1376f0fc7c49434319ac22eb8cd5fbc 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.m1D 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key a1376f0fc7c49434319ac22eb8cd5fbc 0 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 a1376f0fc7c49434319ac22eb8cd5fbc 0 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=a1376f0fc7c49434319ac22eb8cd5fbc 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.m1D 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.m1D 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.m1D 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:33:26.075 16:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=3e5f9bd6b7bc5e3fb1a0f0e5b741bd61d4861f93c6466a7a838aae7a1eab6cf9 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.dC0 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 3e5f9bd6b7bc5e3fb1a0f0e5b741bd61d4861f93c6466a7a838aae7a1eab6cf9 3 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 3e5f9bd6b7bc5e3fb1a0f0e5b741bd61d4861f93c6466a7a838aae7a1eab6cf9 3 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=3e5f9bd6b7bc5e3fb1a0f0e5b741bd61d4861f93c6466a7a838aae7a1eab6cf9 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:33:26.075 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.dC0 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo 
/tmp/spdk.key-sha512.dC0 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.dC0 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3331894 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3331894 ']' 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.G5D 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.337 16:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.yBJ ]] 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yBJ 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.CXt 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.iSc ]] 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iSc 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.337 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kf8 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.MqN ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MqN 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ej2 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.m1D ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.m1D 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.598 16:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dC0 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:26.598 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:26.598 16:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:26.599 16:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:29.894 Waiting for block devices as requested 00:33:29.894 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:29.894 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:29.894 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:30.153 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:30.153 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:30.153 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:30.153 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:30.413 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:30.413 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:30.673 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:30.673 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:30.673 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:30.933 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:30.933 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:30.933 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:30.933 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:31.193 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:32.132 No valid GPT data, bailing 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 
-- # echo 10.0.0.1 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:33:32.132 16:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:32.132 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:33:32.132 00:33:32.132 Discovery Log Number of Records 2, Generation counter 2 00:33:32.132 =====Discovery Log Entry 0====== 00:33:32.132 trtype: tcp 00:33:32.132 adrfam: ipv4 00:33:32.132 subtype: current discovery subsystem 00:33:32.132 treq: not specified, sq flow control disable supported 00:33:32.132 portid: 1 00:33:32.132 trsvcid: 4420 00:33:32.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:32.132 traddr: 10.0.0.1 00:33:32.132 eflags: none 00:33:32.132 sectype: none 00:33:32.132 =====Discovery Log Entry 1====== 00:33:32.132 trtype: tcp 00:33:32.132 adrfam: ipv4 00:33:32.132 subtype: nvme subsystem 00:33:32.132 treq: not specified, sq flow control disable supported 00:33:32.132 portid: 1 00:33:32.132 trsvcid: 4420 00:33:32.132 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:32.132 traddr: 10.0.0.1 00:33:32.132 eflags: none 00:33:32.132 sectype: none 00:33:32.132 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 
00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.133 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.393 nvme0n1 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:32.393 16:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:32.393 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:32.394 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:32.394 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:32.394 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:32.394 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:32.394 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.394 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.657 nvme0n1 00:33:32.657 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.657 
16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.657 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.657 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.657 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.657 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.657 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:32.658 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.659 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.660 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:33:32.931 nvme0n1 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.931 16:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:32.931 16:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.191 nvme0n1 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:33.191 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:33.192 16:57:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.192 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.452 nvme0n1 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.452 
16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:33.452 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.453 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.713 
nvme0n1 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.713 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.714 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.974 nvme0n1 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:33.974 16:57:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:33.974 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.975 16:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.236 nvme0n1 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha256 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.236 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.497 nvme0n1 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha256 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:34.497 16:57:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.497 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.758 nvme0n1 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.758 16:57:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.758 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.019 nvme0n1 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.019 16:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:35.019 16:57:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.019 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.279 nvme0n1 00:33:35.279 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.279 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.279 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.279 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.279 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:35.540 16:57:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.540 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.801 nvme0n1 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.801 
16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.801 16:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.062 nvme0n1 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.062 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.337 
16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:36.337 
16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.337 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:36.338 16:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.338 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.648 nvme0n1 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:36.648 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.648 16:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.649 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.915 nvme0n1 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.915 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.916 16:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:36.916 16:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.916 16:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.485 nvme0n1 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:37.485 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 
00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.486 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.055 nvme0n1 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.056 16:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:38.056 16:57:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.056 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.625 nvme0n1 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.625 16:57:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.625 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.626 16:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.196 nvme0n1 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.196 16:57:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.196 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.768 nvme0n1 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.768 16:57:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 
00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.768 16:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.710 nvme0n1 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.710 16:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.280 nvme0n1 00:33:41.280 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.280 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.280 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.280 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.280 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.280 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:41.541 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:41.542 16:57:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:41.542 16:57:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.542 16:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.112 nvme0n1 00:33:42.112 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:42.373 16:57:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.373 16:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.316 nvme0n1 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:43.316 16:57:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.316 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.888 nvme0n1 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.888 16:57:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.888 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.149 16:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.149 nvme0n1 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:44.149 16:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:33:44.149 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:44.150 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:44.150 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:44.150 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:44.150 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:44.150 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:44.150 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.410 nvme0n1 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.410 16:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:44.410 16:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:44.410 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:44.411 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:44.411 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:44.411 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.411 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.672 nvme0n1 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.672 
16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 
00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.672 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.933 nvme0n1 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey= 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.933 16:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.194 nvme0n1 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.194 16:57:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.194 16:57:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:45.194 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.455 nvme0n1 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.455 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.456 
16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.456 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.717 nvme0n1 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.717 16:57:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:45.717 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.978 nvme0n1 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.978 16:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.978 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.978 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.978 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.978 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:46.239 16:57:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.239 16:57:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.239 nvme0n1 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.239 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.500 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.500 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.500 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.500 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.501 nvme0n1 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.501 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.762 16:57:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.762 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.023 nvme0n1 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha384 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:47.023 16:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:47.024 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:47.024 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:47.024 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:47.024 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:47.024 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.024 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.284 nvme0n1 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.284 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.545 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.805 nvme0n1 00:33:47.805 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.805 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.805 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.805 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.805 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.805 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.805 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.806 16:57:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.806 16:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.066 nvme0n1 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 
ffdhe4096 4 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:48.066 16:57:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:48.066 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.067 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.637 nvme0n1 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:48.637 
16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.637 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.207 nvme0n1 00:33:49.207 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.207 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.207 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.207 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.207 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.207 16:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.207 16:57:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.207 16:57:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.207 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.468 nvme0n1 00:33:49.468 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.468 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.468 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.468 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.468 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.728 16:57:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.728 16:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.299 nvme0n1 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe6144 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:50.299 16:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.299 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.871 nvme0n1 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.871 16:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:50.871 16:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:50.871 16:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.871 16:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.442 nvme0n1 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for 
dhgroup in "${dhgroups[@]}" 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:51.442 16:57:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.442 16:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.012 nvme0n1 00:33:52.012 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.012 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.012 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.012 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.012 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.012 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.272 16:57:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:52.272 16:57:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:52.272 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:52.273 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:52.273 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.273 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.212 nvme0n1 00:33:53.212 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.212 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.212 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.212 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.212 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:53.213 
16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:53.213 16:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 
-- # echo cvl_0_0 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.213 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.784 nvme0n1 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe8192 3 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:53.784 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:54.045 16:58:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.045 16:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.617 nvme0n1 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:54.617 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.877 
16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 
00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:54.877 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.878 16:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.449 nvme0n1 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.449 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.449 16:58:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.709 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:55.710 16:58:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.710 nvme0n1 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.710 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ 
-z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.971 nvme0n1 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.971 16:58:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.971 16:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:55.971 
16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.971 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.232 nvme0n1 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:56.232 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:56.233 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.494 nvme0n1 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.494 16:58:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:56.494 16:58:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.494 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.754 nvme0n1 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:56.754 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.755 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.015 nvme0n1 00:33:57.015 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.015 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.015 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.015 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.015 16:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.015 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.015 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.015 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:57.016 16:58:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:57.016 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:57.276 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.277 nvme0n1 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.277 16:58:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.277 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:57.538 16:58:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.538 nvme0n1 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.538 
16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.538 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.799 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 
00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.800 nvme0n1 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.800 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey= 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.061 16:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.061 nvme0n1 00:33:58.061 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.323 16:58:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.323 16:58:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:58.323 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.584 nvme0n1 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.584 
16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:58.584 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.585 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.846 nvme0n1 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.846 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.106 16:58:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:59.106 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:59.107 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:33:59.107 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.107 16:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.367 nvme0n1 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.367 16:58:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.367 16:58:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.367 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.629 nvme0n1 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.629 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:59.630 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.890 nvme0n1 00:33:59.890 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.152 16:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.152 16:58:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.152 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.724 nvme0n1 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha512 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.725 16:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.370 nvme0n1 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.370 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.630 nvme0n1 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.630 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.891 16:58:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:01.891 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.892 16:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.153 nvme0n1 00:34:02.153 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.153 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.153 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.153 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.153 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.153 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.413 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.413 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.413 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.413 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe6144 4 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:02.414 16:58:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.414 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.983 nvme0n1 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.983 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJiNjNmNDk3OGViMThkMzY0OWNkYmUxYmFlZmVmZTW824Y4: 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: ]] 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBjMGViZDc3Y2JiN2RjMWI0NjgyODE1M2I0NjU4ODhjYjFlNDEyMDkwNmZmYzJlMjc5Zjc0YTU0MjQ0MmMyYbg8vpA=: 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:02.984 
16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.984 16:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.554 nvme0n1 00:34:03.554 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.554 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.554 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.554 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.554 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.814 16:58:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:03.814 16:58:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.814 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.815 16:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.755 nvme0n1 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.755 16:58:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.755 16:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.325 nvme0n1 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTRhYzlkYTE5OTBkNDljNzcyN2VhZWNiNTAyNGRiMmQyODAyNDY1MWE0YzhkNjNmwrPxFw==: 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: ]] 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTEzNzZmMGZjN2M0OTQzNDMxOWFjMjJlYjhjZDVmYmPCxDB4: 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.325 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:05.585 16:58:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.585 16:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.155 nvme0n1 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.155 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.415 16:58:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1ZjliZDZiN2JjNWUzZmIxYTBmMGU1Yjc0MWJkNjFkNDg2MWY5M2M2NDY2YTdhODM4YWFlN2ExZWFiNmNmOXNWKL8=: 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.415 16:58:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:06.415 16:58:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.415 16:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.984 nvme0n1 00:34:06.984 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.984 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.984 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.984 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.984 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.984 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe2048 1 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.245 16:58:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:07.245 16:58:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.245 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.245 request: 00:34:07.245 { 00:34:07.245 "name": "nvme0", 00:34:07.245 "trtype": "tcp", 00:34:07.245 "traddr": "10.0.0.1", 00:34:07.245 "adrfam": "ipv4", 00:34:07.245 "trsvcid": "4420", 00:34:07.245 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:07.246 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:07.246 "prchk_reftag": false, 00:34:07.246 "prchk_guard": false, 00:34:07.246 "hdgst": false, 00:34:07.246 "ddgst": false, 00:34:07.246 "allow_unrecognized_csi": false, 00:34:07.246 "method": "bdev_nvme_attach_controller", 00:34:07.246 "req_id": 1 00:34:07.246 } 00:34:07.246 Got JSON-RPC error response 00:34:07.246 response: 00:34:07.246 { 00:34:07.246 "code": -5, 00:34:07.246 "message": "Input/output error" 00:34:07.246 } 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:07.246 16:58:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 
00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.246 request: 00:34:07.246 { 00:34:07.246 "name": "nvme0", 00:34:07.246 "trtype": "tcp", 00:34:07.246 "traddr": "10.0.0.1", 00:34:07.246 "adrfam": "ipv4", 00:34:07.246 "trsvcid": "4420", 00:34:07.246 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:07.246 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:07.246 "prchk_reftag": false, 00:34:07.246 "prchk_guard": false, 00:34:07.246 "hdgst": false, 00:34:07.246 "ddgst": false, 00:34:07.246 "dhchap_key": "key2", 00:34:07.246 "allow_unrecognized_csi": false, 00:34:07.246 "method": "bdev_nvme_attach_controller", 00:34:07.246 "req_id": 1 00:34:07.246 } 00:34:07.246 Got JSON-RPC error response 00:34:07.246 response: 00:34:07.246 { 00:34:07.246 "code": -5, 00:34:07.246 "message": "Input/output error" 00:34:07.246 } 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:07.246 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.506 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.506 request: 00:34:07.506 { 00:34:07.506 "name": "nvme0", 00:34:07.506 "trtype": "tcp", 00:34:07.506 "traddr": "10.0.0.1", 00:34:07.506 "adrfam": "ipv4", 00:34:07.506 "trsvcid": "4420", 00:34:07.506 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:07.506 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:07.506 "prchk_reftag": false, 00:34:07.506 "prchk_guard": false, 00:34:07.506 "hdgst": false, 00:34:07.506 "ddgst": false, 00:34:07.506 "dhchap_key": "key1", 00:34:07.506 "dhchap_ctrlr_key": "ckey2", 00:34:07.506 "allow_unrecognized_csi": false, 00:34:07.506 "method": 
"bdev_nvme_attach_controller", 00:34:07.506 "req_id": 1 00:34:07.506 } 00:34:07.506 Got JSON-RPC error response 00:34:07.506 response: 00:34:07.507 { 00:34:07.507 "code": -5, 00:34:07.507 "message": "Input/output error" 00:34:07.507 } 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.507 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.767 nvme0n1 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 
00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 
-- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.767 request: 00:34:07.767 { 00:34:07.767 "name": "nvme0", 00:34:07.767 "dhchap_key": "key1", 00:34:07.767 "dhchap_ctrlr_key": "ckey2", 00:34:07.767 "method": "bdev_nvme_set_keys", 00:34:07.767 "req_id": 1 00:34:07.767 } 00:34:07.767 Got JSON-RPC error response 00:34:07.767 response: 00:34:07.767 { 00:34:07.767 "code": -13, 00:34:07.767 "message": "Permission denied" 00:34:07.767 } 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.767 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.768 
16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:07.768 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.768 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.768 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.027 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:08.027 16:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:08.967 16:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.967 16:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:08.967 16:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.967 16:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.967 16:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.967 16:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:08.967 16:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key 
sha256 ffdhe2048 1 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdjMzQyNzYzYTMyOWZjNDVlOWU5MGUyNjJiMTFjMzVjZDMzOTUyMDZiMzZmNGRmpkfSdA==: 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: ]] 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM5YWNjMTRiN2FjYTk1ZDQwYTE5NTQ1YjFiNDI3NGY0ZTg1NDljYjU2ZDU3NTZhKZJHFA==: 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:09.907 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:10.167 16:58:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.167 16:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.167 nvme0n1 00:34:10.167 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.167 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:10.167 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:34:10.167 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.167 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.167 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.167 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:34:10.167 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc0ZDYzYTQzZjZlOWM4YmMwOTg0ZTI3NmJmMGViM2bO3BAr: 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: ]] 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGNkMjU4ZTA1ZmY0MzllZjc2YTgzOGRmNmY2NzQ0ODjx1YGJ: 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 
00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.168 request: 00:34:10.168 { 00:34:10.168 "name": "nvme0", 00:34:10.168 "dhchap_key": "key2", 00:34:10.168 "dhchap_ctrlr_key": "ckey1", 00:34:10.168 "method": "bdev_nvme_set_keys", 00:34:10.168 "req_id": 1 00:34:10.168 } 00:34:10.168 Got JSON-RPC error response 00:34:10.168 response: 00:34:10.168 { 00:34:10.168 "code": -13, 00:34:10.168 "message": "Permission denied" 00:34:10.168 } 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.168 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.428 16:58:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:10.428 16:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:11.367 rmmod nvme_tcp 00:34:11.367 rmmod nvme_fabrics 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 
0 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 3331894 ']' 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 3331894 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3331894 ']' 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3331894 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3331894 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3331894' 00:34:11.367 killing process with pid 3331894 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3331894 00:34:11.367 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3331894 00:34:11.628 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:11.628 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:34:11.628 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@254 -- # local dev 00:34:11.628 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:11.628 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:11.628 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 
-- # eval '_remove_target_ns 15> /dev/null' 00:34:11.628 16:58:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:13.537 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # return 0 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:13.538 16:58:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:13.538 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@274 -- # iptr 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-save 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-restore 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 
00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:34:13.798 16:58:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:17.100 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:17.100 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:17.361 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:17.932 16:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.G5D /tmp/spdk.key-null.CXt /tmp/spdk.key-sha256.kf8 /tmp/spdk.key-sha384.Ej2 /tmp/spdk.key-sha512.dC0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:17.932 16:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:21.233 0000:80:01.6 (8086 
0b00): Already using the vfio-pci driver 00:34:21.233 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:21.233 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:21.233 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:21.493 00:34:21.493 real 1m4.504s 00:34:21.493 user 0m58.555s 00:34:21.493 sys 0m16.429s 00:34:21.493 16:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:21.493 16:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.493 ************************************ 00:34:21.493 END TEST nvmf_auth_host 00:34:21.493 ************************************ 00:34:21.493 16:58:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:21.493 16:58:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:21.493 16:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 
-- # '[' 3 -le 1 ']' 00:34:21.493 16:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:21.493 16:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.493 ************************************ 00:34:21.493 START TEST nvmf_digest 00:34:21.493 ************************************ 00:34:21.493 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:21.754 * Looking for test storage... 00:34:21.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.754 16:58:28 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.754 --rc genhtml_branch_coverage=1 00:34:21.754 --rc genhtml_function_coverage=1 00:34:21.754 --rc genhtml_legend=1 00:34:21.754 --rc geninfo_all_blocks=1 00:34:21.754 --rc geninfo_unexecuted_blocks=1 00:34:21.754 00:34:21.754 ' 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.754 --rc genhtml_branch_coverage=1 00:34:21.754 --rc genhtml_function_coverage=1 00:34:21.754 --rc genhtml_legend=1 00:34:21.754 --rc geninfo_all_blocks=1 00:34:21.754 --rc geninfo_unexecuted_blocks=1 00:34:21.754 00:34:21.754 ' 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.754 --rc genhtml_branch_coverage=1 00:34:21.754 --rc genhtml_function_coverage=1 00:34:21.754 --rc genhtml_legend=1 00:34:21.754 --rc geninfo_all_blocks=1 00:34:21.754 --rc geninfo_unexecuted_blocks=1 00:34:21.754 00:34:21.754 ' 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.754 --rc genhtml_branch_coverage=1 00:34:21.754 --rc genhtml_function_coverage=1 00:34:21.754 --rc genhtml_legend=1 00:34:21.754 --rc geninfo_all_blocks=1 00:34:21.754 --rc geninfo_unexecuted_blocks=1 00:34:21.754 00:34:21.754 ' 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.754 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.755 16:58:28 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:21.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:34:21.755 
16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # xtrace_disable 00:34:21.755 16:58:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # pci_devs=() 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # net_devs=() 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # e810=() 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # local -ga e810 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # x722=() 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # 
local -ga x722 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # mlx=() 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # local -ga mlx 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:29.898 
16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:29.898 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:29.898 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:29.898 16:58:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:29.898 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:29.898 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # is_hw=yes 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@247 -- # create_target_ns 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.898 
16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:29.898 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:34:29.899 16:58:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:29.899 10.0.0.1 
00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:29.899 10.0.0.2 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:34:29.899 
16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:29.899 16:58:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:29.899 16:58:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:29.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:29.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.554 ms 00:34:29.899 00:34:29.899 --- 10.0.0.1 ping statistics --- 00:34:29.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.899 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:29.899 
16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:29.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:29.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:34:29.899 00:34:29.899 --- 10.0.0.2 ping statistics --- 00:34:29.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.899 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # return 0 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:34:29.899 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:29.900 
16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # return 1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev= 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@160 -- # return 0 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 
00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:34:29.900 16:58:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # return 1 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev= 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@160 -- # return 0 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:34:29.900 ' 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.900 16:58:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.900 ************************************ 00:34:29.900 START TEST nvmf_digest_clean 00:34:29.900 ************************************ 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:29.900 16:58:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=3349910 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 3349910 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3349910 ']' 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:29.900 16:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:29.900 [2024-11-05 16:58:36.281890] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:34:29.900 [2024-11-05 16:58:36.281959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.900 [2024-11-05 16:58:36.364519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.900 [2024-11-05 16:58:36.404872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.900 [2024-11-05 16:58:36.404910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.900 [2024-11-05 16:58:36.404919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.900 [2024-11-05 16:58:36.404927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.900 [2024-11-05 16:58:36.404934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:29.900 [2024-11-05 16:58:36.405515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.161 null0 00:34:30.161 [2024-11-05 16:58:37.177684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.161 [2024-11-05 16:58:37.201903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3350159 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3350159 /var/tmp/bperf.sock 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3350159 ']' 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:30.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:30.161 16:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.422 [2024-11-05 16:58:37.260751] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:34:30.422 [2024-11-05 16:58:37.260801] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350159 ] 00:34:30.422 [2024-11-05 16:58:37.347719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.422 [2024-11-05 16:58:37.383298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.992 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:30.992 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:30.992 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:30.992 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:30.992 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:31.253 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.253 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.513 nvme0n1 00:34:31.513 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:31.513 16:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:31.773 Running I/O for 2 seconds... 00:34:33.654 18218.00 IOPS, 71.16 MiB/s [2024-11-05T15:58:40.717Z] 18312.50 IOPS, 71.53 MiB/s 00:34:33.654 Latency(us) 00:34:33.654 [2024-11-05T15:58:40.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.654 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:33.654 nvme0n1 : 2.05 17950.20 70.12 0.00 0.00 6979.14 3604.48 49588.91 00:34:33.654 [2024-11-05T15:58:40.717Z] =================================================================================================================== 00:34:33.654 [2024-11-05T15:58:40.717Z] Total : 17950.20 70.12 0.00 0.00 6979.14 3604.48 49588.91 00:34:33.654 { 00:34:33.654 "results": [ 00:34:33.654 { 00:34:33.654 "job": "nvme0n1", 00:34:33.654 "core_mask": "0x2", 00:34:33.654 "workload": "randread", 00:34:33.654 "status": "finished", 00:34:33.654 "queue_depth": 128, 00:34:33.654 "io_size": 4096, 00:34:33.654 "runtime": 2.047498, 00:34:33.654 "iops": 17950.20068395671, 00:34:33.654 "mibps": 70.1179714217059, 00:34:33.654 "io_failed": 0, 00:34:33.654 "io_timeout": 0, 00:34:33.654 "avg_latency_us": 6979.144677169211, 00:34:33.654 "min_latency_us": 3604.48, 00:34:33.654 "max_latency_us": 49588.90666666667 00:34:33.654 } 00:34:33.654 ], 00:34:33.654 "core_count": 1 00:34:33.654 } 00:34:33.654 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:33.654 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:34:33.654 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:33.654 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:33.654 | select(.opcode=="crc32c") 00:34:33.654 | "\(.module_name) \(.executed)"' 00:34:33.654 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3350159 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3350159 ']' 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3350159 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3350159 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- 
# '[' reactor_1 = sudo ']' 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3350159' 00:34:33.914 killing process with pid 3350159 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3350159 00:34:33.914 Received shutdown signal, test time was about 2.000000 seconds 00:34:33.914 00:34:33.914 Latency(us) 00:34:33.914 [2024-11-05T15:58:40.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.914 [2024-11-05T15:58:40.977Z] =================================================================================================================== 00:34:33.914 [2024-11-05T15:58:40.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:33.914 16:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3350159 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3350845 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3350845 
/var/tmp/bperf.sock 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3350845 ']' 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.175 [2024-11-05 16:58:41.078162] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:34:34.175 [2024-11-05 16:58:41.078212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350845 ] 00:34:34.175 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:34.175 Zero copy mechanism will not be used. 
00:34:34.175 [2024-11-05 16:58:41.152175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.175 [2024-11-05 16:58:41.181152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:34.175 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:34.436 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:34.436 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:34.695 nvme0n1 00:34:34.695 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:34.695 16:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:34.695 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:34.695 Zero copy mechanism will not be used. 00:34:34.695 Running I/O for 2 seconds... 
00:34:37.017 2998.00 IOPS, 374.75 MiB/s [2024-11-05T15:58:44.080Z] 3245.00 IOPS, 405.62 MiB/s 00:34:37.017 Latency(us) 00:34:37.017 [2024-11-05T15:58:44.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.017 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:37.017 nvme0n1 : 2.00 3246.23 405.78 0.00 0.00 4925.47 785.07 11468.80 00:34:37.017 [2024-11-05T15:58:44.080Z] =================================================================================================================== 00:34:37.017 [2024-11-05T15:58:44.080Z] Total : 3246.23 405.78 0.00 0.00 4925.47 785.07 11468.80 00:34:37.017 { 00:34:37.017 "results": [ 00:34:37.017 { 00:34:37.017 "job": "nvme0n1", 00:34:37.017 "core_mask": "0x2", 00:34:37.017 "workload": "randread", 00:34:37.017 "status": "finished", 00:34:37.017 "queue_depth": 16, 00:34:37.017 "io_size": 131072, 00:34:37.017 "runtime": 2.004171, 00:34:37.017 "iops": 3246.229987361358, 00:34:37.017 "mibps": 405.77874842016973, 00:34:37.017 "io_failed": 0, 00:34:37.017 "io_timeout": 0, 00:34:37.017 "avg_latency_us": 4925.466494517882, 00:34:37.017 "min_latency_us": 785.0666666666667, 00:34:37.017 "max_latency_us": 11468.8 00:34:37.017 } 00:34:37.017 ], 00:34:37.017 "core_count": 1 00:34:37.017 } 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:37.017 | select(.opcode=="crc32c") 00:34:37.017 | "\(.module_name) \(.executed)"' 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3350845 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3350845 ']' 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3350845 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:37.017 16:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3350845 00:34:37.017 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:37.017 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:37.017 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3350845' 00:34:37.017 killing process with pid 3350845 00:34:37.017 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3350845 00:34:37.017 Received shutdown signal, test time was about 2.000000 seconds 
00:34:37.017 00:34:37.017 Latency(us) 00:34:37.017 [2024-11-05T15:58:44.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.017 [2024-11-05T15:58:44.080Z] =================================================================================================================== 00:34:37.017 [2024-11-05T15:58:44.080Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:37.017 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3350845 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3351522 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3351522 /var/tmp/bperf.sock 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3351522 ']' 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:37.277 16:58:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:37.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:37.277 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:37.277 [2024-11-05 16:58:44.171994] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:34:37.277 [2024-11-05 16:58:44.172053] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351522 ] 00:34:37.278 [2024-11-05 16:58:44.257774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.278 [2024-11-05 16:58:44.287034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.218 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:38.218 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:38.218 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:38.218 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:38.218 16:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:38.218 16:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:38.218 16:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:38.478 nvme0n1 00:34:38.478 16:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:38.478 16:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:38.738 Running I/O for 2 seconds... 
00:34:40.616 21450.00 IOPS, 83.79 MiB/s [2024-11-05T15:58:47.679Z] 21574.00 IOPS, 84.27 MiB/s 00:34:40.616 Latency(us) 00:34:40.616 [2024-11-05T15:58:47.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.616 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:40.616 nvme0n1 : 2.01 21574.11 84.27 0.00 0.00 5925.44 2266.45 17585.49 00:34:40.616 [2024-11-05T15:58:47.679Z] =================================================================================================================== 00:34:40.616 [2024-11-05T15:58:47.679Z] Total : 21574.11 84.27 0.00 0.00 5925.44 2266.45 17585.49 00:34:40.616 { 00:34:40.616 "results": [ 00:34:40.616 { 00:34:40.616 "job": "nvme0n1", 00:34:40.616 "core_mask": "0x2", 00:34:40.616 "workload": "randwrite", 00:34:40.616 "status": "finished", 00:34:40.616 "queue_depth": 128, 00:34:40.616 "io_size": 4096, 00:34:40.616 "runtime": 2.005923, 00:34:40.616 "iops": 21574.108278333715, 00:34:40.616 "mibps": 84.27386046224107, 00:34:40.616 "io_failed": 0, 00:34:40.616 "io_timeout": 0, 00:34:40.616 "avg_latency_us": 5925.442238038019, 00:34:40.616 "min_latency_us": 2266.4533333333334, 00:34:40.616 "max_latency_us": 17585.493333333332 00:34:40.616 } 00:34:40.616 ], 00:34:40.616 "core_count": 1 00:34:40.616 } 00:34:40.616 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:40.616 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:40.617 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:40.617 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:40.617 | select(.opcode=="crc32c") 00:34:40.617 | "\(.module_name) \(.executed)"' 00:34:40.617 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3351522 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3351522 ']' 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3351522 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3351522 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3351522' 00:34:40.876 killing process with pid 3351522 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3351522 00:34:40.876 Received shutdown signal, test time was about 2.000000 seconds 
00:34:40.876 00:34:40.876 Latency(us) 00:34:40.876 [2024-11-05T15:58:47.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.876 [2024-11-05T15:58:47.939Z] =================================================================================================================== 00:34:40.876 [2024-11-05T15:58:47.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:40.876 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3351522 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3352202 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3352202 /var/tmp/bperf.sock 00:34:41.136 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3352202 ']' 00:34:41.137 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:41.137 16:58:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:41.137 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:41.137 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:41.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:41.137 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:41.137 16:58:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:41.137 [2024-11-05 16:58:48.015148] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:34:41.137 [2024-11-05 16:58:48.015206] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352202 ] 00:34:41.137 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:41.137 Zero copy mechanism will not be used. 
00:34:41.137 [2024-11-05 16:58:48.098837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.137 [2024-11-05 16:58:48.127809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.810 16:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:41.810 16:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:41.810 16:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:41.810 16:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:41.810 16:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:42.069 16:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.069 16:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.329 nvme0n1 00:34:42.329 16:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:42.329 16:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:42.329 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:42.329 Zero copy mechanism will not be used. 00:34:42.329 Running I/O for 2 seconds... 
00:34:44.653 3170.00 IOPS, 396.25 MiB/s [2024-11-05T15:58:51.716Z] 4073.00 IOPS, 509.12 MiB/s 00:34:44.653 Latency(us) 00:34:44.653 [2024-11-05T15:58:51.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.653 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:44.653 nvme0n1 : 2.00 4075.88 509.48 0.00 0.00 3921.30 1713.49 8355.84 00:34:44.653 [2024-11-05T15:58:51.716Z] =================================================================================================================== 00:34:44.653 [2024-11-05T15:58:51.716Z] Total : 4075.88 509.48 0.00 0.00 3921.30 1713.49 8355.84 00:34:44.653 { 00:34:44.653 "results": [ 00:34:44.653 { 00:34:44.653 "job": "nvme0n1", 00:34:44.653 "core_mask": "0x2", 00:34:44.653 "workload": "randwrite", 00:34:44.653 "status": "finished", 00:34:44.653 "queue_depth": 16, 00:34:44.653 "io_size": 131072, 00:34:44.653 "runtime": 2.003249, 00:34:44.653 "iops": 4075.8787349950007, 00:34:44.653 "mibps": 509.4848418743751, 00:34:44.653 "io_failed": 0, 00:34:44.653 "io_timeout": 0, 00:34:44.653 "avg_latency_us": 3921.3046384976524, 00:34:44.653 "min_latency_us": 1713.4933333333333, 00:34:44.653 "max_latency_us": 8355.84 00:34:44.653 } 00:34:44.653 ], 00:34:44.653 "core_count": 1 00:34:44.653 } 00:34:44.653 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:44.653 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:44.653 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:44.653 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:44.653 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:44.653 | 
select(.opcode=="crc32c") 00:34:44.653 | "\(.module_name) \(.executed)"' 00:34:44.653 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:44.653 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3352202 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3352202 ']' 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3352202 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3352202 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3352202' 00:34:44.654 killing process with pid 3352202 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3352202 00:34:44.654 Received shutdown signal, test time was about 2.000000 seconds 00:34:44.654 00:34:44.654 Latency(us) 
00:34:44.654 [2024-11-05T15:58:51.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.654 [2024-11-05T15:58:51.717Z] =================================================================================================================== 00:34:44.654 [2024-11-05T15:58:51.717Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:44.654 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3352202 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3349910 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3349910 ']' 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3349910 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3349910 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:44.914 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3349910' 00:34:44.914 killing process with pid 3349910 00:34:44.915 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3349910 00:34:44.915 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3349910 00:34:44.915 00:34:44.915 real 0m15.744s 00:34:44.915 user 
0m31.155s 00:34:44.915 sys 0m3.388s 00:34:44.915 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:44.915 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:44.915 ************************************ 00:34:44.915 END TEST nvmf_digest_clean 00:34:44.915 ************************************ 00:34:45.269 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:45.269 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:45.269 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:45.269 16:58:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:45.269 ************************************ 00:34:45.269 START TEST nvmf_digest_error 00:34:45.269 ************************************ 00:34:45.269 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:34:45.269 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:45.269 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:45.269 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=3352996 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 3352996 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3352996 ']' 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:45.270 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:45.270 [2024-11-05 16:58:52.102608] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:34:45.270 [2024-11-05 16:58:52.102667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.270 [2024-11-05 16:58:52.183914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.270 [2024-11-05 16:58:52.224524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.270 [2024-11-05 16:58:52.224562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:45.270 [2024-11-05 16:58:52.224569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.270 [2024-11-05 16:58:52.224576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.270 [2024-11-05 16:58:52.224582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.270 [2024-11-05 16:58:52.225176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.841 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:45.841 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:45.841 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:45.841 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:45.841 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.104 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:46.104 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:46.104 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.104 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.104 [2024-11-05 16:58:52.927188] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:46.104 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.104 16:58:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:46.104 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:46.104 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.104 16:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.104 null0 00:34:46.104 [2024-11-05 16:58:53.009339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.104 [2024-11-05 16:58:53.033551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3353263 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3353263 /var/tmp/bperf.sock 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3353263 ']' 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:46.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:46.104 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.104 [2024-11-05 16:58:53.101617] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:34:46.104 [2024-11-05 16:58:53.101666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353263 ] 00:34:46.365 [2024-11-05 16:58:53.184539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.365 [2024-11-05 16:58:53.214548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.936 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:46.936 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:46.936 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:46.936 16:58:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:47.196 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:47.196 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.196 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:47.196 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.196 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.196 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.456 nvme0n1 00:34:47.456 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:47.456 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.456 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:47.456 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.456 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:47.456 16:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:47.717 Running I/O for 2 seconds... 00:34:47.717 [2024-11-05 16:58:54.570959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.570992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.571002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.584790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.584812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.584819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.597864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.597884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.597891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.611072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.611091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1708 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.611098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.623581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.623601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.623615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.635682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.635700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.635706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.647661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.647679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.647685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.659924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.659942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.659949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.673247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.673265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.673271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.686297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.686314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.686320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.698459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.698476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.698482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.710004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.710022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.710028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.723943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.723960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.723966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.735675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.717 [2024-11-05 16:58:54.735695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.717 [2024-11-05 16:58:54.735702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.717 [2024-11-05 16:58:54.747256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.718 [2024-11-05 16:58:54.747273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.718 [2024-11-05 16:58:54.747279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.718 [2024-11-05 16:58:54.759223] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.718 [2024-11-05 16:58:54.759241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.718 [2024-11-05 16:58:54.759247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.718 [2024-11-05 16:58:54.772226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.718 [2024-11-05 16:58:54.772243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.718 [2024-11-05 16:58:54.772250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.786008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.786026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.786032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.796845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.796862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.796869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.809550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.809567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.809574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.823440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.823458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.823464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.836054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.836074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.836085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.847343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.847360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.847366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.861255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.861272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.861279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.873269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.873287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.873295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.884953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.884971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.979 [2024-11-05 16:58:54.884977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.979 [2024-11-05 16:58:54.897309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.979 [2024-11-05 16:58:54.897326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 
16:58:54.897332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:54.910592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:54.910609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:54.910615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:54.923695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:54.923712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:54.923719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:54.936582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:54.936599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:54.936606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:54.948186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:54.948206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1712 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:54.948214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:54.959771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:54.959788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:54.959794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:54.973613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:54.973630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:54.973637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:54.985904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:54.985920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:54.985927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:55.000273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:55.000290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:55.000296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:55.012697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:55.012714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:55.012721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:55.023257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:55.023274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:55.023280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.980 [2024-11-05 16:58:55.036120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:47.980 [2024-11-05 16:58:55.036137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.980 [2024-11-05 16:58:55.036144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.241 [2024-11-05 16:58:55.049012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ea6e0) 00:34:48.241 [2024-11-05 16:58:55.049030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.049037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.061787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.061805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.061811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.075375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.075392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.075398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.088720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.088738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.088744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.101564] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.101581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.101588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.113645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.113662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.113668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.125468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.125485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.125491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.137925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.137941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.137948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.150330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.150348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.150354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.163021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.163038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.163048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.176554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.176571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.176578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.188550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.188566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.188573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.198749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.198766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.198772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.211783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.211800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.211807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.224836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.224853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.224859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.237657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.237674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 
16:58:55.237681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.251458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.251475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.251481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.264098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.264114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.264121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.277474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.277495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.277501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.289970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.289987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24303 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.289993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.242 [2024-11-05 16:58:55.300858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.242 [2024-11-05 16:58:55.300875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.242 [2024-11-05 16:58:55.300882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.313756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.313773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.313780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.327533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.327551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.327557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.339117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.339135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.339142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.350855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.350873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.350879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.364973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.364990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.364996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.378434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.378452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.378459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.388795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.388813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.388820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.402353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.402371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.402377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.415665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.415683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.415690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.428808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.428825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.428831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.440361] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.440379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.440386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.452206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.452223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.452230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.464860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.464877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.464883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.477668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.477685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.477691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.490736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.490761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.490767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.502935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.502952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.504 [2024-11-05 16:58:55.502958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.504 [2024-11-05 16:58:55.515539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.504 [2024-11-05 16:58:55.515556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.505 [2024-11-05 16:58:55.515563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.505 [2024-11-05 16:58:55.525570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.505 [2024-11-05 16:58:55.525588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.505 [2024-11-05 16:58:55.525594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.505 [2024-11-05 16:58:55.540505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.505 [2024-11-05 16:58:55.540523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.505 [2024-11-05 16:58:55.540529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.505 20102.00 IOPS, 78.52 MiB/s [2024-11-05T15:58:55.568Z] [2024-11-05 16:58:55.553600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.505 [2024-11-05 16:58:55.553615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.505 [2024-11-05 16:58:55.553622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.766 [2024-11-05 16:58:55.568102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.766 [2024-11-05 16:58:55.568120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.568126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.579358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.579374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13306 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.579381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.592079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.592097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.592104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.604132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.604149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.604156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.617964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.617982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.617989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.630838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.630856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:123 nsid:1 lba:22180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.630863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.642513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.642531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.642537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.655949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.655966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.655972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.669764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.669780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.669787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.680776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.680793] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.680799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.694008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.694026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.694032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.707858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.707875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.707885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.717227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.717245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.717252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.732081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.732098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.732105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.745917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.745934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.745941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.758072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.758088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.758095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.769299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.769315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.769321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.782294] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.782311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.782317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.795456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.795473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.795479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.808375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.808393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.808399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.767 [2024-11-05 16:58:55.821377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:48.767 [2024-11-05 16:58:55.821397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.767 [2024-11-05 16:58:55.821403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:49.029 [2024-11-05 16:58:55.831066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.029 [2024-11-05 16:58:55.831083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.029 [2024-11-05 16:58:55.831090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.029 [2024-11-05 16:58:55.844582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.029 [2024-11-05 16:58:55.844600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.029 [2024-11-05 16:58:55.844606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.029 [2024-11-05 16:58:55.857525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.029 [2024-11-05 16:58:55.857542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.029 [2024-11-05 16:58:55.857549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.029 [2024-11-05 16:58:55.870580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.029 [2024-11-05 16:58:55.870598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.029 [2024-11-05 16:58:55.870604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.029 [2024-11-05 16:58:55.882521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.029 [2024-11-05 16:58:55.882539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.029 [2024-11-05 16:58:55.882545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.029 [2024-11-05 16:58:55.894977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.029 [2024-11-05 16:58:55.894995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.029 [2024-11-05 16:58:55.895001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.029 [2024-11-05 16:58:55.908842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.029 [2024-11-05 16:58:55.908858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.029 [2024-11-05 16:58:55.908864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.029 [2024-11-05 16:58:55.920258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.029 [2024-11-05 16:58:55.920275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 
16:58:55.920284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:55.932583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:55.932600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:55.932607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:55.945314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:55.945331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:55.945337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:55.959167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:55.959185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:55.959191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:55.972846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:55.972863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:235 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:55.972870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:55.985749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:55.985767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:55.985773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:55.997080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:55.997098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:55.997104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:56.010542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:56.010560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:56.010566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:56.023042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:56.023058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:56.023065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:56.035190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:56.035209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:56.035216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:56.047662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:56.047679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:56.047685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:56.059321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:56.059338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:56.059345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:56.074039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:56.074056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:56.074062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.030 [2024-11-05 16:58:56.086395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.030 [2024-11-05 16:58:56.086412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.030 [2024-11-05 16:58:56.086419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.100191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.100208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.100215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.110563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.110580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.110587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.124018] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.124035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.124042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.136957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.136974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.136981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.150349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.150366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.150372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.160553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.160570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.160577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.174208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.174224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.174230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.187488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.187505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.187511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.200258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.200275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.200282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.212856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.212873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.212880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.225335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.225352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.225359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.236398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.236416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.236422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.251618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.251635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.251648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.261329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.261346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 
16:58:56.261352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.275367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.275384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.275390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.290063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.290080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.290086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.301559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.301575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.301582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.313604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.291 [2024-11-05 16:58:56.313621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10429 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.291 [2024-11-05 16:58:56.313627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.291 [2024-11-05 16:58:56.326956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.292 [2024-11-05 16:58:56.326972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-11-05 16:58:56.326979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.292 [2024-11-05 16:58:56.340807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.292 [2024-11-05 16:58:56.340824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-11-05 16:58:56.340831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.292 [2024-11-05 16:58:56.350860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.292 [2024-11-05 16:58:56.350878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.292 [2024-11-05 16:58:56.350884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.365011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.365033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.365039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.379114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.379131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.379137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.391168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.391184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.391191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.403896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.403913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.403919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.416214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.416231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.416237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.430023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.430040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.430047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.441687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.441705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.441712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.454910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.454928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.454934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.466875] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.466891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.466898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.479385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.479402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.479408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.489176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.489192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.489199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.503642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.503660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.503666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.516578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.516595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.516601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.530273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.530290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.530296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.542949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.542965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.542972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 [2024-11-05 16:58:56.554115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ea6e0) 00:34:49.553 [2024-11-05 16:58:56.554132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.553 [2024-11-05 16:58:56.554139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.553 20148.50 IOPS, 78.71 MiB/s 00:34:49.553 Latency(us) 00:34:49.553 [2024-11-05T15:58:56.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:49.553 nvme0n1 : 2.01 20167.02 78.78 0.00 0.00 6339.71 1993.39 17694.72 00:34:49.553 [2024-11-05T15:58:56.616Z] =================================================================================================================== 00:34:49.553 [2024-11-05T15:58:56.616Z] Total : 20167.02 78.78 0.00 0.00 6339.71 1993.39 17694.72 00:34:49.553 { 00:34:49.553 "results": [ 00:34:49.553 { 00:34:49.553 "job": "nvme0n1", 00:34:49.553 "core_mask": "0x2", 00:34:49.553 "workload": "randread", 00:34:49.553 "status": "finished", 00:34:49.553 "queue_depth": 128, 00:34:49.553 "io_size": 4096, 00:34:49.553 "runtime": 2.005056, 00:34:49.553 "iops": 20167.01777905455, 00:34:49.553 "mibps": 78.77741319943183, 00:34:49.554 "io_failed": 0, 00:34:49.554 "io_timeout": 0, 00:34:49.554 "avg_latency_us": 6339.710889966037, 00:34:49.554 "min_latency_us": 1993.3866666666668, 00:34:49.554 "max_latency_us": 17694.72 00:34:49.554 } 00:34:49.554 ], 00:34:49.554 "core_count": 1 00:34:49.554 } 00:34:49.554 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:49.554 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:49.554 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:49.554 | .driver_specific 00:34:49.554 | .nvme_error 00:34:49.554 | .status_code 00:34:49.554 | .command_transient_transport_error' 00:34:49.554 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3353263 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3353263 ']' 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3353263 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3353263 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3353263' 00:34:49.814 killing process with pid 3353263 00:34:49.814 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3353263 00:34:49.814 Received shutdown signal, test time was about 2.000000 seconds 00:34:49.814 00:34:49.814 Latency(us) 00:34:49.814 [2024-11-05T15:58:56.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.814 [2024-11-05T15:58:56.877Z] =================================================================================================================== 00:34:49.814 [2024-11-05T15:58:56.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:49.814 16:58:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3353263 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3353955 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3353955 /var/tmp/bperf.sock 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3353955 ']' 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:50.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:50.075 16:58:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.075 [2024-11-05 16:58:56.979491] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:34:50.076 [2024-11-05 16:58:56.979567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353955 ] 00:34:50.076 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:50.076 Zero copy mechanism will not be used. 00:34:50.076 [2024-11-05 16:58:57.064323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.076 [2024-11-05 16:58:57.093685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.017 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:51.017 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:51.017 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:51.017 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:51.017 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:51.017 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.017 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:34:51.018 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.018 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.018 16:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.277 nvme0n1 00:34:51.278 16:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:51.278 16:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.278 16:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.278 16:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.278 16:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:51.278 16:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.539 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.539 Zero copy mechanism will not be used. 00:34:51.539 Running I/O for 2 seconds... 
00:34:51.539 [2024-11-05 16:58:58.440815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.440854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.440864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.447879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.447901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.447908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.453239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.453258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.453265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.464178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.464196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.464203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.474213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.474230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.474237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.478760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.478777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.478784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.488125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.488144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.488150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.496517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.496536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.496542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.505652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.505671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.505678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.513852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.513871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.513877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.525992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.526011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.526017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.536718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.536737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.536743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.544704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.544722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.544728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.553578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.553596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.553602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.563375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.563393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.563400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.573642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.573660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.573666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.585446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.585464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.585470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.539 [2024-11-05 16:58:58.595276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.539 [2024-11-05 16:58:58.595294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.539 [2024-11-05 16:58:58.595304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.605711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.605729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.605735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.616060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.616078] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.616084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.627023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.627040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.627047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.635479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.635497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.635503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.643138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.643155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.643161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.651774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.651792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.651798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.661298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.661316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.661323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.670036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.670055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.670061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.678805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.678826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.678832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.686869] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.800 [2024-11-05 16:58:58.686887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.800 [2024-11-05 16:58:58.686894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.800 [2024-11-05 16:58:58.694696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.694713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.694720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.699962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.699982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.699989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.711682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.711700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.711706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.721830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.721848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.721854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.733096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.733114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.733120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.741435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.741453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.741459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.750776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.750793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.750799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.759881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.759899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.759905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.769185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.769203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.769209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.776474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.776492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.776498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.785996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.786014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 
16:58:58.786020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.795481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.795499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.795505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.805351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.805369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.805375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.814347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.814365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.814372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.825614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.825632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.825639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.833236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.833259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.833266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.843109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.843127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.843133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.850628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.850647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.850655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.801 [2024-11-05 16:58:58.859056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:51.801 [2024-11-05 16:58:58.859074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.801 [2024-11-05 16:58:58.859080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.062 [2024-11-05 16:58:58.870928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.062 [2024-11-05 16:58:58.870946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.062 [2024-11-05 16:58:58.870953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.062 [2024-11-05 16:58:58.883692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.062 [2024-11-05 16:58:58.883710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.883717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.894393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.894411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.894417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.899822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.899840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.899846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.909342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.909361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.909367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.918405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.918424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.918430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.924113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.924131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.924137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.933150] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.933168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.933174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.942206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.942224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.942230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.953378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.953396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.953402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.960789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.960808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.960814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.969761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.969778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.969784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.976576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.976594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.976601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.982123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.982141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.982150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.987500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.987518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.987524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:58.994987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:58.995005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:58.995011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.003726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.003744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.003755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.012204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.012223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.012229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.022761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.022779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.022785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.027686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.027705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.027711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.032810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.032827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.032833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.037824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.037842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.037848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.049699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.049720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.049727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.057540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.057559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.057565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.063135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.063154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.063160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.068251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.068270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.068276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.073319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.073338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.073345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.082185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.063 [2024-11-05 16:58:59.082202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.063 [2024-11-05 16:58:59.082208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.063 [2024-11-05 16:58:59.092147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.064 [2024-11-05 16:58:59.092166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.064 [2024-11-05 16:58:59.092172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.064 [2024-11-05 16:58:59.102937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.064 [2024-11-05 16:58:59.102955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.064 [2024-11-05 16:58:59.102961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.064 [2024-11-05 16:58:59.114260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.064 [2024-11-05 16:58:59.114278] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.064 [2024-11-05 16:58:59.114284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.127682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.127700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.127706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.140414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.140432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.140438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.153325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.153343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.153350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.165453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.165471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.165477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.178744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.178767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.178773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.191867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.191886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.191892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.204661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.204680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.204686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.216134] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.216153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.216159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.222877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.222895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.222904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.234786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.325 [2024-11-05 16:58:59.234805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.325 [2024-11-05 16:58:59.234811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.325 [2024-11-05 16:58:59.242884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.242902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.242908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.248611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.248629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.248635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.256841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.256858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.256864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.265212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.265230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.265236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.272101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.272118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.272124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.277030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.277049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.277055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.287819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.287837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.287843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.295781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.295802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.295808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.303711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.303729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.303735] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.313285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.313304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.313310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.320885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.320903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.320909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.328603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.328621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.328627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.340577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.340595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.340602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.348624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.348642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.348648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.356731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.356754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.356760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.365651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.365669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.365679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.374885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.374903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.374909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.382567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.382585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.382591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.326 [2024-11-05 16:58:59.387630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.326 [2024-11-05 16:58:59.387648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.326 [2024-11-05 16:58:59.387654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.588 [2024-11-05 16:58:59.396140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.588 [2024-11-05 16:58:59.396159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.588 [2024-11-05 16:58:59.396165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.588 [2024-11-05 16:58:59.403059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.588 [2024-11-05 16:58:59.403077] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.588 [2024-11-05 16:58:59.403083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.588 [2024-11-05 16:58:59.411796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.588 [2024-11-05 16:58:59.411813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.588 [2024-11-05 16:58:59.411819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.588 [2024-11-05 16:58:59.422228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.588 [2024-11-05 16:58:59.422247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.588 [2024-11-05 16:58:59.422253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.588 3469.00 IOPS, 433.62 MiB/s [2024-11-05T15:58:59.651Z] [2024-11-05 16:58:59.432608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.432627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.432634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.440973] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.440995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.441001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.447592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.447610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.447617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.452722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.452740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.452751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.462381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.462400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.462406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.471859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.471878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.471885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.481580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.481599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.481605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.492242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.492260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.492267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.500127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.500145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.500152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.509628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.509647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.509653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.521465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.521484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.521490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.528895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.528914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.528920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.540026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.540044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.540050] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.543411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.543428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.543434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.553214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.553232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.553238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.560416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.560434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.560441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.570379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.570398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.570404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.576699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.576717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.576723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.587288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.587306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.587316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.596857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.596874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.596880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.606178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.606196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.606202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.611571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.611589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.611596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.620685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.620703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.620709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.630576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.630594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.630601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.639805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.639823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.639830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.589 [2024-11-05 16:58:59.650302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.589 [2024-11-05 16:58:59.650320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.589 [2024-11-05 16:58:59.650326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.661148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.661166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.661172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.673488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.673508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.673514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.686906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.686924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.686930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.699138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.699156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.699162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.711695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.711713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.711719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.723255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.723273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.723279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.731678] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.731697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.731703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.739822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.739840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.739846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.746027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.746045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.746051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.757507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.757525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.757531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.769601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.769619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.769625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.782614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.782633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.782639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.796157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.796175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.796182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.808595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.808613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.808619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.820740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.850 [2024-11-05 16:58:59.820762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.850 [2024-11-05 16:58:59.820769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.850 [2024-11-05 16:58:59.831752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.851 [2024-11-05 16:58:59.831769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.851 [2024-11-05 16:58:59.831775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.851 [2024-11-05 16:58:59.842310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.851 [2024-11-05 16:58:59.842328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.851 [2024-11-05 16:58:59.842334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.851 [2024-11-05 16:58:59.854609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.851 [2024-11-05 16:58:59.854627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.851 [2024-11-05 
16:58:59.854633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.851 [2024-11-05 16:58:59.866818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.851 [2024-11-05 16:58:59.866839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.851 [2024-11-05 16:58:59.866845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.851 [2024-11-05 16:58:59.879776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.851 [2024-11-05 16:58:59.879793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.851 [2024-11-05 16:58:59.879800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.851 [2024-11-05 16:58:59.890825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.851 [2024-11-05 16:58:59.890842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.851 [2024-11-05 16:58:59.890849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.851 [2024-11-05 16:58:59.902549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:52.851 [2024-11-05 16:58:59.902567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.851 [2024-11-05 16:58:59.902573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.914097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.914115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.914121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.923725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.923743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.923754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.934933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.934951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.934957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.944101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.944119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.944125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.952262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.952280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.952287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.962479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.962497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.962503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.972987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.973005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.973011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.984745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.984772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.984779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:58:59.994569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:58:59.994587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:58:59.994593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:59:00.007253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:59:00.007272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:59:00.007279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:59:00.018401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:59:00.018420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:59:00.018426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:59:00.024468] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:59:00.024488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:59:00.024496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:59:00.037386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:59:00.037404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:59:00.037410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:59:00.045193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:59:00.045211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:59:00.045221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:59:00.055872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:59:00.055890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.112 [2024-11-05 16:59:00.055896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:34:53.112 [2024-11-05 16:59:00.061040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.112 [2024-11-05 16:59:00.061058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.061064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.065960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.065978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.065985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.071104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.071122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.071128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.076207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.076225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.076231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.081097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.081114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.081121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.086277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.086295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.086302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.091385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.091402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.091408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.101660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.101681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.101687] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.108606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.108624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.108631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.118100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.118118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.118124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.125239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.125257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.125264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.134407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.134425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.134431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.144832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.144850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.144857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.154512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.154530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.154536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.166530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.166547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.166554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.113 [2024-11-05 16:59:00.172303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.113 [2024-11-05 16:59:00.172321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.113 [2024-11-05 16:59:00.172327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.179206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.179225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.179231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.185144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.185162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.185168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.193433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.193451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.193457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.202210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 
16:59:00.202228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.202235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.209452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.209469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.209475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.216570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.216588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.216594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.223452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.223469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.223476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.230855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.230872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.230879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.235984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.236000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.236010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.241170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.241189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.241195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.246734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.246759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.246766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.251783] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.251801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.251807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.257105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.257123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.257129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.265124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.265142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.265149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.270278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.270295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.270301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.275405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.275423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.275429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.285779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.285797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.285803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.292331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.292348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.292354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.301707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.301725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.301731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.313724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.313741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.313752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.324602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.324619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.324625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.333175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.333193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.333199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.344668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.344685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 
16:59:00.344691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.352433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.352451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.352457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.362770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.362788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-11-05 16:59:00.362795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.375 [2024-11-05 16:59:00.374422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.375 [2024-11-05 16:59:00.374440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-11-05 16:59:00.374449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.376 [2024-11-05 16:59:00.384851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0) 00:34:53.376 [2024-11-05 16:59:00.384868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.376 [2024-11-05 16:59:00.384874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:53.376 [2024-11-05 16:59:00.394479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0)
00:34:53.376 [2024-11-05 16:59:00.394497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.376 [2024-11-05 16:59:00.394503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:53.376 [2024-11-05 16:59:00.404913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0)
00:34:53.376 [2024-11-05 16:59:00.404931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.376 [2024-11-05 16:59:00.404937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:53.376 [2024-11-05 16:59:00.414632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0)
00:34:53.376 [2024-11-05 16:59:00.414649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.376 [2024-11-05 16:59:00.414656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:53.376 [2024-11-05 16:59:00.422685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0)
00:34:53.376 [2024-11-05 16:59:00.422702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.376 [2024-11-05 16:59:00.422708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:53.376 [2024-11-05 16:59:00.427825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131d9a0)
00:34:53.376 [2024-11-05 16:59:00.427843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.376 [2024-11-05 16:59:00.427849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:53.376 3446.50 IOPS, 430.81 MiB/s
00:34:53.376 Latency(us)
00:34:53.376 [2024-11-05T15:59:00.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.376 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:53.376 nvme0n1 : 2.00 3450.30 431.29 0.00 0.00 4634.76 839.68 13762.56
00:34:53.376 [2024-11-05T15:59:00.439Z] ===================================================================================================================
00:34:53.376 [2024-11-05T15:59:00.439Z] Total : 3450.30 431.29 0.00 0.00 4634.76 839.68 13762.56
00:34:53.376 {
00:34:53.376 "results": [
00:34:53.376 {
00:34:53.376 "job": "nvme0n1",
00:34:53.376 "core_mask": "0x2",
00:34:53.376 "workload": "randread",
00:34:53.376 "status": "finished",
00:34:53.376 "queue_depth": 16,
00:34:53.376 "io_size": 131072,
00:34:53.376 "runtime": 2.002432,
00:34:53.376 "iops": 3450.304429813347,
00:34:53.376 "mibps": 431.28805372666835,
00:34:53.376 "io_failed": 0,
00:34:53.376 "io_timeout": 0,
00:34:53.376 "avg_latency_us": 4634.755069233367,
00:34:53.376 "min_latency_us": 839.68,
00:34:53.376 "max_latency_us": 13762.56
00:34:53.376 }
00:34:53.376 ],
00:34:53.376 "core_count": 1
00:34:53.376 }
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:53.637 | .driver_specific
00:34:53.637 | .nvme_error
00:34:53.637 | .status_code
00:34:53.637 | .command_transient_transport_error'
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 ))
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3353955
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3353955 ']'
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3353955
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3353955
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:34:53.637 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3353955'
00:34:53.638 killing process with pid 3353955 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3353955
00:34:53.638 Received shutdown signal, test time was about 2.000000 seconds
00:34:53.638
00:34:53.638 Latency(us)
00:34:53.638 [2024-11-05T15:59:00.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.638 [2024-11-05T15:59:00.701Z] ===================================================================================================================
00:34:53.638 [2024-11-05T15:59:00.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:53.638 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3353955
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3354712
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3354712 /var/tmp/bperf.sock
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3354712 ']'
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:34:53.899 16:59:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:53.899 [2024-11-05 16:59:00.847337] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization...
00:34:53.899 [2024-11-05 16:59:00.847396] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354712 ]
00:34:53.899 [2024-11-05 16:59:00.931252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:53.899 [2024-11-05 16:59:00.960606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:54.840 16:59:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:55.411 nvme0n1
00:34:55.411 16:59:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:55.411 16:59:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:55.411 16:59:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:55.411 16:59:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:55.411 16:59:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:55.411 16:59:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:55.411 Running I/O for 2 seconds...
00:34:55.411 [2024-11-05 16:59:02.360530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eb760
00:34:55.411 [2024-11-05 16:59:02.362169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.362195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.370183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f2d80
00:34:55.411 [2024-11-05 16:59:02.371147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.371170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.383231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e5220
00:34:55.411 [2024-11-05 16:59:02.384345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.384362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.395279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e4140
00:34:55.411 [2024-11-05 16:59:02.396374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.396391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.407243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e3060
00:34:55.411 [2024-11-05 16:59:02.408354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.408370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.419225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fb048
00:34:55.411 [2024-11-05 16:59:02.420311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.420327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.432948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0
00:34:55.411 [2024-11-05 16:59:02.434689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.434705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.442964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f5be8
00:34:55.411 [2024-11-05 16:59:02.444215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.444231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.455664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f5be8
00:34:55.411 [2024-11-05 16:59:02.456904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.456920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:34:55.411 [2024-11-05 16:59:02.467588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f5be8
00:34:55.411 [2024-11-05 16:59:02.468863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.411 [2024-11-05 16:59:02.468879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.478731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f6890
00:34:55.672 [2024-11-05 16:59:02.479976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.479991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.491413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f6890
00:34:55.672 [2024-11-05 16:59:02.492658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.492674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.503353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f6890
00:34:55.672 [2024-11-05 16:59:02.504602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.504618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.515287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f6890
00:34:55.672 [2024-11-05 16:59:02.516501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.516517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.528732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f6890
00:34:55.672 [2024-11-05 16:59:02.530576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.530592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.538346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eb328
00:34:55.672 [2024-11-05 16:59:02.539574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.539590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.551028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eb328
00:34:55.672 [2024-11-05 16:59:02.552261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.552277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.562944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eb328
00:34:55.672 [2024-11-05 16:59:02.564181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.564197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.574875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eb328
00:34:55.672 [2024-11-05 16:59:02.576087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.576103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.586802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eb328
00:34:55.672 [2024-11-05 16:59:02.588040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.588055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.598724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eb328
00:34:55.672 [2024-11-05 16:59:02.599955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.599971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:55.672 [2024-11-05 16:59:02.610618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f5378
00:34:55.672 [2024-11-05 16:59:02.611862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.672 [2024-11-05 16:59:02.611877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.622581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f4298
00:34:55.673 [2024-11-05 16:59:02.623826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.623843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.634532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166df118
00:34:55.673 [2024-11-05 16:59:02.635787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.635803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.646503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166de038
00:34:55.673 [2024-11-05 16:59:02.647699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.647715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.658435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f6890
00:34:55.673 [2024-11-05 16:59:02.659667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.659683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.669579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f1ca0
00:34:55.673 [2024-11-05 16:59:02.670784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.670800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.682296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e23b8
00:34:55.673 [2024-11-05 16:59:02.683515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.683534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.693427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f35f0
00:34:55.673 [2024-11-05 16:59:02.694620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.694635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.706118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f35f0
00:34:55.673 [2024-11-05 16:59:02.707339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.707355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.718040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f35f0
00:34:55.673 [2024-11-05 16:59:02.719222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.719237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:55.673 [2024-11-05 16:59:02.729961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f35f0
00:34:55.673 [2024-11-05 16:59:02.731172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.673 [2024-11-05 16:59:02.731188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.741069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e1b48
00:34:55.934 [2024-11-05 16:59:02.742261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.742276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.755927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eaef0
00:34:55.934 [2024-11-05 16:59:02.757942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.757958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.766302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0
00:34:55.934 [2024-11-05 16:59:02.767656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.767671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.779759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0
00:34:55.934 [2024-11-05 16:59:02.781757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.781772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.790115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ea680
00:34:55.934 [2024-11-05 16:59:02.791467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.791486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.802049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ea680
00:34:55.934 [2024-11-05 16:59:02.803398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.803415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.813977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ea680
00:34:55.934 [2024-11-05 16:59:02.815284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.815300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.827442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166df550
00:34:55.934 [2024-11-05 16:59:02.829433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.829449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.837803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e01f8
00:34:55.934 [2024-11-05 16:59:02.839140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.839155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.849710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e01f8
00:34:55.934 [2024-11-05 16:59:02.851026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.851042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.861613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f46d0
00:34:55.934 [2024-11-05 16:59:02.862934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.862949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.873553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f6cc8
00:34:55.934 [2024-11-05 16:59:02.874894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.874910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.885522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ea680
00:34:55.934 [2024-11-05 16:59:02.886808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.886824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.896593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e9e10
00:34:55.934 [2024-11-05 16:59:02.897898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.897915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.909279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e9e10
00:34:55.934 [2024-11-05 16:59:02.910589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.910605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.921206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e9e10
00:34:55.934 [2024-11-05 16:59:02.922515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.922531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.933129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e9e10
00:34:55.934 [2024-11-05 16:59:02.934440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.934 [2024-11-05 16:59:02.934456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:55.934 [2024-11-05 16:59:02.945056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e9e10
00:34:55.934 [2024-11-05 16:59:02.946359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.935 [2024-11-05 16:59:02.946374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:55.935 [2024-11-05 16:59:02.956177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f7da8
00:34:55.935 [2024-11-05 16:59:02.957465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.935 [2024-11-05 16:59:02.957481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:55.935 [2024-11-05 16:59:02.968884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f7da8
00:34:55.935 [2024-11-05 16:59:02.970198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.935 [2024-11-05 16:59:02.970214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:55.935 [2024-11-05 16:59:02.980820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f7da8
00:34:55.935 [2024-11-05 16:59:02.982142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.935 [2024-11-05 16:59:02.982158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:55.935 [2024-11-05 16:59:02.992735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f7da8
00:34:55.935 [2024-11-05 16:59:02.994012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:55.935 [2024-11-05 16:59:02.994028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:56.195 [2024-11-05 16:59:03.004675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e9e10
00:34:56.195 [2024-11-05 16:59:03.005962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.195
[2024-11-05 16:59:03.005977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:56.195 [2024-11-05 16:59:03.018167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eaef0 00:34:56.195 [2024-11-05 16:59:03.020097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.195 [2024-11-05 16:59:03.020113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:56.195 [2024-11-05 16:59:03.028930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e6738 00:34:56.195 [2024-11-05 16:59:03.030381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.195 [2024-11-05 16:59:03.030397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:56.195 [2024-11-05 16:59:03.041242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e8088 00:34:56.196 [2024-11-05 16:59:03.042678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.042694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.052171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fb8b8 00:34:56.196 [2024-11-05 16:59:03.053152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10090 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.053168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.064916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eea00 00:34:56.196 [2024-11-05 16:59:03.066521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.066537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.075279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fda78 00:34:56.196 [2024-11-05 16:59:03.076197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.076213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.088760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e01f8 00:34:56.196 [2024-11-05 16:59:03.090350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.090366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.098453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166edd58 00:34:56.196 [2024-11-05 16:59:03.099391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:92 nsid:1 lba:21528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.099409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.111244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fda78 00:34:56.196 [2024-11-05 16:59:03.112212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.112227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.122414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f1430 00:34:56.196 [2024-11-05 16:59:03.123358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.123374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.136611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f1430 00:34:56.196 [2024-11-05 16:59:03.138195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.138210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.147038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e8d30 00:34:56.196 [2024-11-05 16:59:03.147991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.148007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.158944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ee5c8 00:34:56.196 [2024-11-05 16:59:03.159862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.159878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.170868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fdeb0 00:34:56.196 [2024-11-05 16:59:03.171826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.171842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.182808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fa7d8 00:34:56.196 [2024-11-05 16:59:03.183768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.183784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.193968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f0350 00:34:56.196 
[2024-11-05 16:59:03.194897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.194912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.206655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f0350 00:34:56.196 [2024-11-05 16:59:03.207608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.207624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.218602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f0350 00:34:56.196 [2024-11-05 16:59:03.219537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.219553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.230503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f0350 00:34:56.196 [2024-11-05 16:59:03.231444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.231460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.242405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb51520) with pdu=0x2000166f0350 00:34:56.196 [2024-11-05 16:59:03.243344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.243360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.196 [2024-11-05 16:59:03.253527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e12d8 00:34:56.196 [2024-11-05 16:59:03.254446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.196 [2024-11-05 16:59:03.254462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.266174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ee190 00:34:56.457 [2024-11-05 16:59:03.267074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.267090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.278102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e0a68 00:34:56.457 [2024-11-05 16:59:03.278997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.279012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.289995] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ea680 00:34:56.457 [2024-11-05 16:59:03.290904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.290919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.301983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f7da8 00:34:56.457 [2024-11-05 16:59:03.302845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.302861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.313026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f7538 00:34:56.457 [2024-11-05 16:59:03.313912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.313927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.325689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f7538 00:34:56.457 [2024-11-05 16:59:03.326590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.326606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:34:56.457 [2024-11-05 16:59:03.336796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0 00:34:56.457 [2024-11-05 16:59:03.337672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.337688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:56.457 21296.00 IOPS, 83.19 MiB/s [2024-11-05T15:59:03.520Z] [2024-11-05 16:59:03.349462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0 00:34:56.457 [2024-11-05 16:59:03.350350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.350365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.361362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0 00:34:56.457 [2024-11-05 16:59:03.362251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.362266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.373247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0 00:34:56.457 [2024-11-05 16:59:03.374136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.457 [2024-11-05 16:59:03.374153] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:56.457 [2024-11-05 16:59:03.385157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0 00:34:56.458 [2024-11-05 16:59:03.386045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.386061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.397079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0 00:34:56.458 [2024-11-05 16:59:03.397964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.397979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.410514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f57b0 00:34:56.458 [2024-11-05 16:59:03.412070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.412088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.421320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e12d8 00:34:56.458 [2024-11-05 16:59:03.422323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.422338] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.434954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f92c0 00:34:56.458 [2024-11-05 16:59:03.436646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.436661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.445285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fc128 00:34:56.458 [2024-11-05 16:59:03.446323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.446338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.457212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e12d8 00:34:56.458 [2024-11-05 16:59:03.458194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.458210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.470695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ef6a8 00:34:56.458 [2024-11-05 16:59:03.472383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.458 [2024-11-05 16:59:03.472398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.482566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e4de8 00:34:56.458 [2024-11-05 16:59:03.484238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.484253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.493314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eaef0 00:34:56.458 [2024-11-05 16:59:03.494487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.494503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.505409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e38d0 00:34:56.458 [2024-11-05 16:59:03.506585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.506601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:56.458 [2024-11-05 16:59:03.517370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ebb98 00:34:56.458 [2024-11-05 16:59:03.518574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5175 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.458 [2024-11-05 16:59:03.518590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:56.719 [2024-11-05 16:59:03.529275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e3498 00:34:56.719 [2024-11-05 16:59:03.530459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.719 [2024-11-05 16:59:03.530474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:56.719 [2024-11-05 16:59:03.542695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e3498 00:34:56.719 [2024-11-05 16:59:03.544516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.719 [2024-11-05 16:59:03.544533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:56.719 [2024-11-05 16:59:03.552337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e38d0 00:34:56.719 [2024-11-05 16:59:03.553502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.719 [2024-11-05 16:59:03.553519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:56.719 [2024-11-05 16:59:03.564995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e38d0 00:34:56.719 [2024-11-05 16:59:03.566167] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.719 [2024-11-05 16:59:03.566182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.719 [2024-11-05 16:59:03.576904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e38d0 00:34:56.719 [2024-11-05 16:59:03.578074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.719 [2024-11-05 16:59:03.578089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.719 [2024-11-05 16:59:03.588797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166eb328 00:34:56.719 [2024-11-05 16:59:03.589982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.589997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.600731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f35f0 00:34:56.720 [2024-11-05 16:59:03.601889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.601904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.611937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e3d08 00:34:56.720 [2024-11-05 16:59:03.613093] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.613108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.623830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e5ec8 00:34:56.720 [2024-11-05 16:59:03.624987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.625003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.638619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fac10 00:34:56.720 [2024-11-05 16:59:03.640588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.640603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.648980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f4b08 00:34:56.720 [2024-11-05 16:59:03.650296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.650312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.660893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f4b08 
00:34:56.720 [2024-11-05 16:59:03.662211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.662227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.672827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f4b08 00:34:56.720 [2024-11-05 16:59:03.674137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.674152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.684712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f4b08 00:34:56.720 [2024-11-05 16:59:03.685982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.685999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.696012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e9e10 00:34:56.720 [2024-11-05 16:59:03.697307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.697322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.708031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb51520) with pdu=0x2000166e3498 00:34:56.720 [2024-11-05 16:59:03.709052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.709068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.720760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f92c0 00:34:56.720 [2024-11-05 16:59:03.722352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.722370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.730366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f5378 00:34:56.720 [2024-11-05 16:59:03.731331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.731347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.742229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e1f80 00:34:56.720 [2024-11-05 16:59:03.743189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.743204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.754071] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ef270 00:34:56.720 [2024-11-05 16:59:03.755024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.755040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.768248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ef270 00:34:56.720 [2024-11-05 16:59:03.769810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.769826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:56.720 [2024-11-05 16:59:03.777854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.720 [2024-11-05 16:59:03.778794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.720 [2024-11-05 16:59:03.778810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.790520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.791472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.791489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:34:56.981 [2024-11-05 16:59:03.802421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.803376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.803392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.814334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.815282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.815298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.826240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.827189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.827208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.838141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.839085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.839101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.850052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.850999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.851015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.861960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.862922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.862938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.873874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.874822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.874837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.885787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.886732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.886753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.897686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.898637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.898653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.909593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.910534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.910550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.921524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.922462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.922478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.933494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f81e0 00:34:56.981 [2024-11-05 16:59:03.934399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.934416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.945344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f0ff8 00:34:56.981 [2024-11-05 16:59:03.946285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.946301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.957267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f0ff8 00:34:56.981 [2024-11-05 16:59:03.958204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.958220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.969196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f0ff8 00:34:56.981 [2024-11-05 16:59:03.970141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.970157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.982633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f0ff8 00:34:56.981 [2024-11-05 16:59:03.984193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 
[2024-11-05 16:59:03.984209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:03.993037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f9f68 00:34:56.981 [2024-11-05 16:59:03.993937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:03.993953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:04.004944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e1b48 00:34:56.981 [2024-11-05 16:59:04.005868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:04.005884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:04.016894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166ec408 00:34:56.981 [2024-11-05 16:59:04.017810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:04.017827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:04.030353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fef90 00:34:56.981 [2024-11-05 16:59:04.031933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5079 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:04.031949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:56.981 [2024-11-05 16:59:04.041346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f8a50 00:34:56.981 [2024-11-05 16:59:04.042444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.981 [2024-11-05 16:59:04.042461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:57.242 [2024-11-05 16:59:04.054993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e7c50 00:34:57.242 [2024-11-05 16:59:04.056732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.242 [2024-11-05 16:59:04.056750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:57.242 [2024-11-05 16:59:04.065347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f8e88 00:34:57.242 [2024-11-05 16:59:04.066434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.242 [2024-11-05 16:59:04.066450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:57.242 [2024-11-05 16:59:04.077249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f8e88 00:34:57.242 [2024-11-05 16:59:04.078342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:80 nsid:1 lba:14914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.242 [2024-11-05 16:59:04.078360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:57.242 [2024-11-05 16:59:04.088356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fdeb0 00:34:57.242 [2024-11-05 16:59:04.089429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.242 [2024-11-05 16:59:04.089445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:57.242 [2024-11-05 16:59:04.101031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fdeb0 00:34:57.242 [2024-11-05 16:59:04.102108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.242 [2024-11-05 16:59:04.102125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.242 [2024-11-05 16:59:04.113060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fdeb0 00:34:57.242 [2024-11-05 16:59:04.114110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.242 [2024-11-05 16:59:04.114126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.126612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f8e88 00:34:57.243 [2024-11-05 16:59:04.128348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.128363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.137388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e5ec8 00:34:57.243 [2024-11-05 16:59:04.138623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.138643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.149454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fda78 00:34:57.243 [2024-11-05 16:59:04.150690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.150707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.161365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fda78 00:34:57.243 [2024-11-05 16:59:04.162602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.162619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.173310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fda78 00:34:57.243 
[2024-11-05 16:59:04.174540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.174556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.185240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fda78 00:34:57.243 [2024-11-05 16:59:04.186481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.186497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.197167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fda78 00:34:57.243 [2024-11-05 16:59:04.198400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.198417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.209063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166fda78 00:34:57.243 [2024-11-05 16:59:04.210294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.210310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.220199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb51520) with pdu=0x2000166e6300 00:34:57.243 [2024-11-05 16:59:04.221412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.221429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.232868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e6300 00:34:57.243 [2024-11-05 16:59:04.234087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.234104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.244805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e6300 00:34:57.243 [2024-11-05 16:59:04.246026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.246043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.256724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e6300 00:34:57.243 [2024-11-05 16:59:04.257958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.257975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.268638] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e6300 00:34:57.243 [2024-11-05 16:59:04.269860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.269876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.280547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e6300 00:34:57.243 [2024-11-05 16:59:04.281774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.281791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.294179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f2d80 00:34:57.243 [2024-11-05 16:59:04.296053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.296070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.243 [2024-11-05 16:59:04.304164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f9f68 00:34:57.243 [2024-11-05 16:59:04.305543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.243 [2024-11-05 16:59:04.305560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:34:57.504 [2024-11-05 16:59:04.318505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e99d8 00:34:57.504 [2024-11-05 16:59:04.320550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.504 [2024-11-05 16:59:04.320566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.504 [2024-11-05 16:59:04.328944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166e3498 00:34:57.504 [2024-11-05 16:59:04.330333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.504 [2024-11-05 16:59:04.330350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:57.504 [2024-11-05 16:59:04.342434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51520) with pdu=0x2000166f7970 00:34:57.504 [2024-11-05 16:59:04.344457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.504 [2024-11-05 16:59:04.344474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.504 21350.00 IOPS, 83.40 MiB/s 00:34:57.504 Latency(us) 00:34:57.504 [2024-11-05T15:59:04.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.504 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:57.504 nvme0n1 : 2.00 21366.93 83.46 0.00 0.00 5983.83 2266.45 14090.24 00:34:57.504 [2024-11-05T15:59:04.567Z] 
=================================================================================================================== 00:34:57.504 [2024-11-05T15:59:04.567Z] Total : 21366.93 83.46 0.00 0.00 5983.83 2266.45 14090.24 00:34:57.504 { 00:34:57.504 "results": [ 00:34:57.504 { 00:34:57.504 "job": "nvme0n1", 00:34:57.504 "core_mask": "0x2", 00:34:57.504 "workload": "randwrite", 00:34:57.504 "status": "finished", 00:34:57.504 "queue_depth": 128, 00:34:57.504 "io_size": 4096, 00:34:57.504 "runtime": 2.004406, 00:34:57.504 "iops": 21366.928656170458, 00:34:57.504 "mibps": 83.46456506316585, 00:34:57.504 "io_failed": 0, 00:34:57.504 "io_timeout": 0, 00:34:57.504 "avg_latency_us": 5983.825499828772, 00:34:57.504 "min_latency_us": 2266.4533333333334, 00:34:57.504 "max_latency_us": 14090.24 00:34:57.504 } 00:34:57.504 ], 00:34:57.504 "core_count": 1 00:34:57.504 } 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:57.504 | .driver_specific 00:34:57.504 | .nvme_error 00:34:57.504 | .status_code 00:34:57.504 | .command_transient_transport_error' 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 )) 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3354712 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3354712 ']' 00:34:57.504 16:59:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3354712 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:57.504 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3354712 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3354712' 00:34:57.765 killing process with pid 3354712 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3354712 00:34:57.765 Received shutdown signal, test time was about 2.000000 seconds 00:34:57.765 00:34:57.765 Latency(us) 00:34:57.765 [2024-11-05T15:59:04.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.765 [2024-11-05T15:59:04.828Z] =================================================================================================================== 00:34:57.765 [2024-11-05T15:59:04.828Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3354712 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
# rw=randwrite 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3355592 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3355592 /var/tmp/bperf.sock 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3355592 ']' 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:57.765 16:59:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:57.765 [2024-11-05 16:59:04.779482] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:34:57.765 [2024-11-05 16:59:04.779545] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355592 ] 00:34:57.765 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.765 Zero copy mechanism will not be used. 00:34:58.026 [2024-11-05 16:59:04.860652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.026 [2024-11-05 16:59:04.890149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.595 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:58.595 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:58.595 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:58.595 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:58.854 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:58.854 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.854 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:58.854 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.854 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.855 16:59:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.116 nvme0n1 00:34:59.116 16:59:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:59.116 16:59:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.116 16:59:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.116 16:59:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.116 16:59:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:59.116 16:59:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:59.116 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:59.116 Zero copy mechanism will not be used. 00:34:59.116 Running I/O for 2 seconds... 
00:34:59.377 [2024-11-05 16:59:06.184633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.377 [2024-11-05 16:59:06.184994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.377 [2024-11-05 16:59:06.185023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.377 [2024-11-05 16:59:06.196140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.377 [2024-11-05 16:59:06.196400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.377 [2024-11-05 16:59:06.196420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.377 [2024-11-05 16:59:06.208239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.377 [2024-11-05 16:59:06.208607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.377 [2024-11-05 16:59:06.208626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.377 [2024-11-05 16:59:06.218579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.377 [2024-11-05 16:59:06.218995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.377 [2024-11-05 16:59:06.219014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.377 [2024-11-05 16:59:06.226523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.377 [2024-11-05 16:59:06.226726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.377 [2024-11-05 16:59:06.226743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.377 [2024-11-05 16:59:06.232778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.377 [2024-11-05 16:59:06.232983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.377 [2024-11-05 16:59:06.233000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.377 [2024-11-05 16:59:06.239426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.239864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.239883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.248373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.248714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.248736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.258152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.258448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.258465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.267947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.268147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.268162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.277354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.277751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.277769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.285822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.286154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:59.378 [2024-11-05 16:59:06.286172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.296766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.297079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.297097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.305737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.306045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.306062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.314265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.314588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.314606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.323596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.323907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.323925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.332904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.333207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.333224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.341183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.341488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.341506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.347833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.348033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.348051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.355918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.356221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.356239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.365914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.366195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.366211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.374111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.374443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.374461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.381694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.382051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.382068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.389378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 
00:34:59.378 [2024-11-05 16:59:06.389687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.389705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.397651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.397968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.397989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.406132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.406481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.406498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.415522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.415839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.415857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.424857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.425158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.425175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.432703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.432915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.432932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.378 [2024-11-05 16:59:06.440289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.378 [2024-11-05 16:59:06.440599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.378 [2024-11-05 16:59:06.440618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.446851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.447200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.447220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.452104] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.452303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.452321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.458284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.458689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.458708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.463153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.463356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.463373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.470525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.470737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.470759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.477480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.477680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.477697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.481779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.482003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.482020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.486034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.486235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.486251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.490322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.640 [2024-11-05 16:59:06.490523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.640 [2024-11-05 16:59:06.490540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.640 [2024-11-05 16:59:06.498563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.498778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.498795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.502930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.503130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.503147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.506887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.507087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.507103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.510847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.511047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.511063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.518985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.519334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.519352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.526845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.527150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.527168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.533736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.533945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.533962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.538368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.538568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:59.641 [2024-11-05 16:59:06.538584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.547448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.547813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.547831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.555116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.555317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.555333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.562801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.563001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.641 [2024-11-05 16:59:06.563018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.641 [2024-11-05 16:59:06.570042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:34:59.641 [2024-11-05 16:59:06.570242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.570262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.575757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.576074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.576092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.582879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.583078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.583096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.590461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.590782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.590799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.595358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.595556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.595573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.599558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.599760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.599777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.603844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.604043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.604059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.611909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.612215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.612233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.621648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.622008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.622025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.631076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.631282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.631298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.635969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.636169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.636185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.645012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.645323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.645340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.654654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.654967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.654985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.664766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.641 [2024-11-05 16:59:06.665022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.641 [2024-11-05 16:59:06.665039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.641 [2024-11-05 16:59:06.673775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.642 [2024-11-05 16:59:06.674123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.642 [2024-11-05 16:59:06.674141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.642 [2024-11-05 16:59:06.683495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.642 [2024-11-05 16:59:06.683837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.642 [2024-11-05 16:59:06.683854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.642 [2024-11-05 16:59:06.690728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.642 [2024-11-05 16:59:06.690947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.642 [2024-11-05 16:59:06.690967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.642 [2024-11-05 16:59:06.696263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.642 [2024-11-05 16:59:06.696565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.642 [2024-11-05 16:59:06.696583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.903 [2024-11-05 16:59:06.705434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.903 [2024-11-05 16:59:06.705739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.903 [2024-11-05 16:59:06.705763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.903 [2024-11-05 16:59:06.714282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.903 [2024-11-05 16:59:06.714602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.903 [2024-11-05 16:59:06.714620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.903 [2024-11-05 16:59:06.724472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.903 [2024-11-05 16:59:06.724735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.903 [2024-11-05 16:59:06.724756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.903 [2024-11-05 16:59:06.731129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.903 [2024-11-05 16:59:06.731444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.903 [2024-11-05 16:59:06.731463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.903 [2024-11-05 16:59:06.739178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.903 [2024-11-05 16:59:06.739548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.903 [2024-11-05 16:59:06.739565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.744651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.744857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.744874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.752771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.752971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.752988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.757051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.757280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.757296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.762350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.762550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.762569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.766523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.766722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.766739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.771329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.771529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.771545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.779584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.779790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.779806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.783939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.784140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.784157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.790025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.790224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.790241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.796233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.796579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.796597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.804077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.804377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.804394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.810926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.811115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.811132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.817678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.817876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.817893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.821909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.822097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.822113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.825955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.826165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.826181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.829906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.830082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.830098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.834811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.834986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.835002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.838961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.839140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.839156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.843008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.843206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.843223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.846559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.846736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.846756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.850244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.850420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.850436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.853791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.853969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.853985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.857758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.857936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.857953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.861498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.861676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.861693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.865803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.865982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.865999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.869946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.870125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.870142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.878752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.904 [2024-11-05 16:59:06.878946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.904 [2024-11-05 16:59:06.878963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.904 [2024-11-05 16:59:06.885981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.886161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.886178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.889837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.890015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.890031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.895643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.895951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.895972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.902446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.902739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.902761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.908908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.909122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.909138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.917514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.917734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.917755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.921434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.921612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.921628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.927865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.928175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.928193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.931879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.932057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.932074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.935669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.935852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.935868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.939628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.939810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.939827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.943417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.943603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.943623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.947088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.947265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.947284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.950915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.951094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.951111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.954499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.954676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.954693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.958092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.958270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.958286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:59.905 [2024-11-05 16:59:06.961744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:34:59.905 [2024-11-05 16:59:06.961928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.905 [2024-11-05 16:59:06.961945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.166 [2024-11-05 16:59:06.967158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.166 [2024-11-05 16:59:06.967421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.166 [2024-11-05 16:59:06.967439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:00.166 [2024-11-05 16:59:06.973880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.166 [2024-11-05 16:59:06.974171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.166 [2024-11-05 16:59:06.974188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:00.166 [2024-11-05 16:59:06.981699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.166 [2024-11-05 16:59:06.982006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.166 [2024-11-05 16:59:06.982023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.166 [2024-11-05 16:59:06.985760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.166 [2024-11-05 16:59:06.985938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.166 [2024-11-05 16:59:06.985955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.166 [2024-11-05 16:59:06.989390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.166 [2024-11-05 16:59:06.989565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.166 [2024-11-05 16:59:06.989582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:00.166 [2024-11-05 16:59:06.995170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:06.995438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:06.995456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.001777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.001957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.001973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.007613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.007918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.007935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.011717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.011908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.011924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.015795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.015975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.015991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.019862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.020037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.020054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.028333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.028536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.028556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.033638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.033821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.033838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.037652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.037836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.037852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.041252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90
00:35:00.167 [2024-11-05 16:59:07.041430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:00.167 [2024-11-05 16:59:07.041447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:00.167 [2024-11-05 16:59:07.044848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data
digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.045024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.045041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.048451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.048627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.048643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.052019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.052196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.052212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.055866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.056042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.056059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 
16:59:07.062507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.062784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.062802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.066890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.067167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.067185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.075035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.075252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.075268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.081978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.082283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.082301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.089179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.089449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.089467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.094900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.095221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.095238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.100923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.101241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.101258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.107050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.107275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.107291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.111023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.111199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.111215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.117138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.117323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.117340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.122599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.122781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.122797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.126562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.126737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.126758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.167 [2024-11-05 16:59:07.130777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.167 [2024-11-05 16:59:07.130956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.167 [2024-11-05 16:59:07.130972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.137911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.138185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.138203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.146132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.146430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.146449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.151043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.151275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.151291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.157719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.158143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.158160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.165437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.165689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.165706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.168 4777.00 IOPS, 597.12 MiB/s [2024-11-05T15:59:07.231Z] [2024-11-05 16:59:07.173555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.173900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.173918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.180614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.180856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.180872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.190550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.190752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.190769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.195666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.195871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.195888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.201422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.201601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.201618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.209448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 
00:35:00.168 [2024-11-05 16:59:07.209769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.209788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.216984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.217264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.217283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.222925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.223103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.223120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.168 [2024-11-05 16:59:07.226865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.168 [2024-11-05 16:59:07.227067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.168 [2024-11-05 16:59:07.227084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.230659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.230841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.230858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.234527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.234704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.234720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.238615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.238909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.238927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.244330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.244511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.244527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 
16:59:07.251281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.251656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.251674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.258352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.258625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.258643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.262276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.262453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.262469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.268531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.268808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.268825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.275881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.276101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.276121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.281078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.281255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.281271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.286332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.286515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.286531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.295355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.295534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.295551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.301772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.301953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.301969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.305381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.305557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.305573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.308981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.309160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.309176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.313040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.313216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.313233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.316841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.317018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.317034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.320678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.320865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.320881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.324238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.324416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.324432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.327825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.328003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.328019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.331388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.331563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.331579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.334946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.335124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.335140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.338463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.338638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.338655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.342010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.342187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.430 [2024-11-05 16:59:07.342203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.430 [2024-11-05 16:59:07.345529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.430 [2024-11-05 16:59:07.345706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.345723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.349119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.349298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.349314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.352656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.352839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.352855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.356290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.356467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.356483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.359834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.360011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.360027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.366442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.366964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.366982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.371960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.372137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.372154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.376029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 
00:35:00.431 [2024-11-05 16:59:07.376208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.376224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.379959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.380136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.380153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.383557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.383734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.383755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.387124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.387300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.387320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.390932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.391109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.391125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.394480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.394658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.394674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.398029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.398206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.398222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.401572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.401753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.401770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 
16:59:07.407558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.407853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.407871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.415655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.415933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.415951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.420773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.420953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.420969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.425289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.425604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.425621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.430165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.430346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.430362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.435358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.435536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.435553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.441981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.442158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.442175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.449425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.449786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.449805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.458000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.458320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.458339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.466761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.466974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.466991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.474516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.474899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.474917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.483086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.483356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.483374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.431 [2024-11-05 16:59:07.491432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.431 [2024-11-05 16:59:07.491756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.431 [2024-11-05 16:59:07.491773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.497177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.497413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.497431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.502268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.502443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.502460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.510055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.510233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.510250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.518300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.518484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.518500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.526115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.526295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.526311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.531892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.532074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.532090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.537361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.537577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.537594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.541770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.541951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.541967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.546377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.546554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.546573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.552946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.553125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.553142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.556805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.556983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.557000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.560724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.560907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.560923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.564645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.564825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.564841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.568614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.568814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.568831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.575135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 
00:35:00.693 [2024-11-05 16:59:07.575400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.575418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.582001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.582315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.582332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.590674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.590907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.590924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.598128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.598313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.598329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.603968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.604144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.604161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.607855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.608033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.608049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.693 [2024-11-05 16:59:07.614427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.693 [2024-11-05 16:59:07.614712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.693 [2024-11-05 16:59:07.614730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.622571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.622786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.622803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.631033] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.631329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.631347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.640833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.641149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.641166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.650284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.650541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.650558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.659904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.660220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.660238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.667852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.668255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.668273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.677202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.677527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.677544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.685332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.685616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.685633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.695529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.695805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.695823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.703961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.704279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.704298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.712656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.712841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.712859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.722789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.723077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.723095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.732491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.732815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.732833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.743227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.743414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.743435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.694 [2024-11-05 16:59:07.754161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.694 [2024-11-05 16:59:07.754315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.694 [2024-11-05 16:59:07.754331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.764614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.764966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.764984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.775337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.775637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:00.957 [2024-11-05 16:59:07.775655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.785972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.786183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.786199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.796773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.796957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.796974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.806213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.806394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.806411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.816200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.816474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.816492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.826136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.826324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.826340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.836711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.837032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.837050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.846595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.846940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.846959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.854529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.854942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.854960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.862261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.862486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.862503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.872173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.872476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.872493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.882767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.882962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.882978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.891483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 
00:35:00.957 [2024-11-05 16:59:07.891882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.891900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.899649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.899951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.899969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.907742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.908097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.908115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.916320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.916484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.916501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.924945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.925204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.925222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.934262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.934555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.934572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.940804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.940962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.940978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.949387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.949553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.949570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 
16:59:07.955366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.955524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.955544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.959819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.960011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.960029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.968579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.968886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.968905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.978223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.978543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.978565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.988856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.957 [2024-11-05 16:59:07.989194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.957 [2024-11-05 16:59:07.989212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.957 [2024-11-05 16:59:07.999496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.958 [2024-11-05 16:59:07.999744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.958 [2024-11-05 16:59:07.999765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.958 [2024-11-05 16:59:08.010216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:00.958 [2024-11-05 16:59:08.010490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.958 [2024-11-05 16:59:08.010507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.020890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.021176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.021193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.030075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.030380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.030397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.036539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.036688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.036704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.040629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.040788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.040805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.046442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.046717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.046735] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.055530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.055799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.055816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.063898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.064165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.064181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.071134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.071328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.071344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.076419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.076561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.076577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.084089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.084303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.084319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.091342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.091482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.091498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.099250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.099529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.099545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.107009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.107378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.107394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.112438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.112648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.112663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.121043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.121179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.121194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.129076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.129347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.129364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.134538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.134657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.134673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.140992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.141311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.141328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.148798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.149047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.149063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.153565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.153681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.153697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.220 [2024-11-05 16:59:08.161708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 
00:35:01.220 [2024-11-05 16:59:08.162065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.162081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.220 4663.00 IOPS, 582.88 MiB/s [2024-11-05T15:59:08.283Z] [2024-11-05 16:59:08.171147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb51860) with pdu=0x2000166fef90 00:35:01.220 [2024-11-05 16:59:08.171433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.220 [2024-11-05 16:59:08.171449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.220 00:35:01.220 Latency(us) 00:35:01.220 [2024-11-05T15:59:08.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.220 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:01.220 nvme0n1 : 2.00 4659.52 582.44 0.00 0.00 3428.49 1617.92 14745.60 00:35:01.220 [2024-11-05T15:59:08.283Z] =================================================================================================================== 00:35:01.220 [2024-11-05T15:59:08.283Z] Total : 4659.52 582.44 0.00 0.00 3428.49 1617.92 14745.60 00:35:01.220 { 00:35:01.220 "results": [ 00:35:01.220 { 00:35:01.220 "job": "nvme0n1", 00:35:01.220 "core_mask": "0x2", 00:35:01.220 "workload": "randwrite", 00:35:01.220 "status": "finished", 00:35:01.220 "queue_depth": 16, 00:35:01.220 "io_size": 131072, 00:35:01.220 "runtime": 2.004928, 00:35:01.220 "iops": 4659.518945318735, 00:35:01.220 "mibps": 582.4398681648419, 00:35:01.220 "io_failed": 0, 00:35:01.220 "io_timeout": 0, 00:35:01.220 "avg_latency_us": 3428.4916463284094, 00:35:01.220 
"min_latency_us": 1617.92, 00:35:01.220 "max_latency_us": 14745.6 00:35:01.220 } 00:35:01.220 ], 00:35:01.220 "core_count": 1 00:35:01.220 } 00:35:01.220 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:01.220 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:01.220 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:01.220 | .driver_specific 00:35:01.220 | .nvme_error 00:35:01.220 | .status_code 00:35:01.220 | .command_transient_transport_error' 00:35:01.220 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 301 > 0 )) 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3355592 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3355592 ']' 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3355592 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3355592 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo 
']' 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3355592' 00:35:01.482 killing process with pid 3355592 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3355592 00:35:01.482 Received shutdown signal, test time was about 2.000000 seconds 00:35:01.482 00:35:01.482 Latency(us) 00:35:01.482 [2024-11-05T15:59:08.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.482 [2024-11-05T15:59:08.545Z] =================================================================================================================== 00:35:01.482 [2024-11-05T15:59:08.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3355592 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3352996 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3352996 ']' 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3352996 00:35:01.482 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3352996 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:01.742 16:59:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3352996' 00:35:01.742 killing process with pid 3352996 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3352996 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3352996 00:35:01.742 00:35:01.742 real 0m16.699s 00:35:01.742 user 0m33.154s 00:35:01.742 sys 0m3.431s 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.742 ************************************ 00:35:01.742 END TEST nvmf_digest_error 00:35:01.742 ************************************ 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:01.742 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:01.742 rmmod nvme_tcp 00:35:01.742 rmmod nvme_fabrics 00:35:02.004 rmmod nvme_keyring 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@107 -- # return 0 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 3352996 ']' 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 3352996 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3352996 ']' 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3352996 00:35:02.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3352996) - No such process 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3352996 is not found' 00:35:02.004 Process with pid 3352996 is not found 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@254 -- # local dev 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:02.004 16:59:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # return 0 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ 
-e /sys/class/net/cvl_0_0/address ]] 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@274 -- # iptr 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-save 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # 
grep -v SPDK_NVMF 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-restore 00:35:03.919 00:35:03.919 real 0m42.400s 00:35:03.919 user 1m6.583s 00:35:03.919 sys 0m12.456s 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.919 ************************************ 00:35:03.919 END TEST nvmf_digest 00:35:03.919 ************************************ 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:03.919 16:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.182 ************************************ 00:35:04.182 START TEST nvmf_bdevperf 00:35:04.182 ************************************ 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:04.182 * Looking for test storage... 
00:35:04.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.182 --rc genhtml_branch_coverage=1 00:35:04.182 --rc genhtml_function_coverage=1 00:35:04.182 --rc genhtml_legend=1 00:35:04.182 --rc geninfo_all_blocks=1 00:35:04.182 --rc geninfo_unexecuted_blocks=1 00:35:04.182 00:35:04.182 ' 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:35:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.182 --rc genhtml_branch_coverage=1 00:35:04.182 --rc genhtml_function_coverage=1 00:35:04.182 --rc genhtml_legend=1 00:35:04.182 --rc geninfo_all_blocks=1 00:35:04.182 --rc geninfo_unexecuted_blocks=1 00:35:04.182 00:35:04.182 ' 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.182 --rc genhtml_branch_coverage=1 00:35:04.182 --rc genhtml_function_coverage=1 00:35:04.182 --rc genhtml_legend=1 00:35:04.182 --rc geninfo_all_blocks=1 00:35:04.182 --rc geninfo_unexecuted_blocks=1 00:35:04.182 00:35:04.182 ' 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.182 --rc genhtml_branch_coverage=1 00:35:04.182 --rc genhtml_function_coverage=1 00:35:04.182 --rc genhtml_legend=1 00:35:04.182 --rc geninfo_all_blocks=1 00:35:04.182 --rc geninfo_unexecuted_blocks=1 00:35:04.182 00:35:04.182 ' 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- 
# NVMF_TRANSPORT_OPTS= 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.182 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:04.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:04.183 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:04.443 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:35:04.443 16:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:11.028 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:11.029 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:11.029 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:11.029 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:11.029 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@247 -- # create_target_ns 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:11.029 16:59:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=() 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:11.029 16:59:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:11.029 16:59:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:11.290 10.0.0.1 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:11.290 10.0.0.2 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:11.290 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:11.291 
16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:11.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:11.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:35:11.291 00:35:11.291 --- 10.0.0.1 ping statistics --- 00:35:11.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.291 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:11.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:35:11.291 00:35:11.291 --- 10.0.0.2 ping statistics --- 00:35:11.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.291 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
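The trace above resolves `initiator0` to `cvl_0_0` and reads its address out of `/sys/class/net/cvl_0_0/ifalias`. That ifalias-based lookup can be sketched as a small shell function; the sysfs root is a parameter here (an assumption, so the sketch can be exercised without real devices), whereas the real `get_ip_address` in `nvmf/setup.sh` reads `/sys/class/net` directly, optionally through `ip netns exec`.

```shell
# Minimal sketch of the ifalias lookup traced above: SPDK's setup.sh stores
# each interface's test IP in /sys/class/net/<dev>/ifalias and reads it back
# with cat. The sysfs root is parameterized here (an assumption, for offline
# testing); it is not an exact copy of setup.sh's get_ip_address.
get_ip_from_ifalias() {
    local root=$1 dev=$2 ip
    ip=$(cat "$root/$dev/ifalias" 2>/dev/null)
    [[ -n $ip ]] && echo "$ip"
}
```

In the log the same helper path runs once per device pair, first for `initiator0` (no namespace) and then for `target0` through `ip netns exec nvmf_ns_spdk`.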
00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # return 1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev= 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@160 -- # return 0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target0 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:11.291 16:59:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:11.291 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target1 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # return 1 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev= 00:35:11.553 16:59:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@160 -- # return 0 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:35:11.553 ' 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=3360366 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 3360366 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@833 -- # '[' -z 3360366 ']' 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:11.553 16:59:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:11.553 [2024-11-05 16:59:18.479011] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:35:11.553 [2024-11-05 16:59:18.479080] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.553 [2024-11-05 16:59:18.583299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:11.813 [2024-11-05 16:59:18.635817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.813 [2024-11-05 16:59:18.635872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.813 [2024-11-05 16:59:18.635881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.813 [2024-11-05 16:59:18.635892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.813 [2024-11-05 16:59:18.635899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
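The `waitforlisten` step above blocks until the freshly started `nvmf_tgt` is listening on `/var/tmp/spdk.sock`. The general wait-for-socket pattern might be sketched as below; the retry budget and sleep interval are assumptions, not `autotest_common.sh`'s exact values (it uses `max_retries=100` per the trace, but its probe logic differs).

```shell
# Simplified sketch of the wait-for-socket pattern logged above: poll until a
# UNIX domain socket appears on disk, or give up after a retry budget. The
# retry count and 0.1s interval are assumptions for illustration only.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```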
00:35:11.813 [2024-11-05 16:59:18.637723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:11.813 [2024-11-05 16:59:18.637771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:11.813 [2024-11-05 16:59:18.637779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.384 [2024-11-05 16:59:19.333327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.384 Malloc0 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:12.384 [2024-11-05 16:59:19.399783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:35:12.384 
16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:35:12.384 { 00:35:12.384 "params": { 00:35:12.384 "name": "Nvme$subsystem", 00:35:12.384 "trtype": "$TEST_TRANSPORT", 00:35:12.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:12.384 "adrfam": "ipv4", 00:35:12.384 "trsvcid": "$NVMF_PORT", 00:35:12.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:12.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:12.384 "hdgst": ${hdgst:-false}, 00:35:12.384 "ddgst": ${ddgst:-false} 00:35:12.384 }, 00:35:12.384 "method": "bdev_nvme_attach_controller" 00:35:12.384 } 00:35:12.384 EOF 00:35:12.384 )") 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:35:12.384 16:59:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:35:12.384 "params": { 00:35:12.384 "name": "Nvme1", 00:35:12.384 "trtype": "tcp", 00:35:12.384 "traddr": "10.0.0.2", 00:35:12.384 "adrfam": "ipv4", 00:35:12.384 "trsvcid": "4420", 00:35:12.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:12.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:12.384 "hdgst": false, 00:35:12.384 "ddgst": false 00:35:12.384 }, 00:35:12.384 "method": "bdev_nvme_attach_controller" 00:35:12.384 }' 00:35:12.645 [2024-11-05 16:59:19.465837] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
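The `gen_nvmf_target_json` expansion above builds one JSON fragment per subsystem from a heredoc template, joins the fragments with `IFS=,`, and pipes the result through `jq` before feeding it to bdevperf via `/dev/fd/62`. A reduced sketch of that accumulate-and-join pattern (a hand-rolled approximation, not the function itself; the jq normalization step is omitted and the addresses are the test values from this run):

```shell
# Sketch of the per-subsystem config generation traced above: fill a template
# per subsystem, collect the fragments in an array, and join them with a comma
# via IFS. This is an illustrative approximation of gen_nvmf_target_json.
gen_target_json_sketch() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("{\"name\":\"Nvme$subsystem\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\"}")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```

With no arguments the `"${@:-1}"` default produces a single `Nvme1` entry, matching the one-controller config printed in the log.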
00:35:12.645 [2024-11-05 16:59:19.465889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360708 ] 00:35:12.645 [2024-11-05 16:59:19.535856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.645 [2024-11-05 16:59:19.571911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.905 Running I/O for 1 seconds... 00:35:13.846 9017.00 IOPS, 35.22 MiB/s 00:35:13.846 Latency(us) 00:35:13.846 [2024-11-05T15:59:20.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.846 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:13.846 Verification LBA range: start 0x0 length 0x4000 00:35:13.846 Nvme1n1 : 1.01 9047.74 35.34 0.00 0.00 14089.55 3112.96 15182.51 00:35:13.846 [2024-11-05T15:59:20.909Z] =================================================================================================================== 00:35:13.846 [2024-11-05T15:59:20.909Z] Total : 9047.74 35.34 0.00 0.00 14089.55 3112.96 15182.51 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3361002 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for 
subsystem in "${@:-1}" 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:35:13.846 { 00:35:13.846 "params": { 00:35:13.846 "name": "Nvme$subsystem", 00:35:13.846 "trtype": "$TEST_TRANSPORT", 00:35:13.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.846 "adrfam": "ipv4", 00:35:13.846 "trsvcid": "$NVMF_PORT", 00:35:13.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.846 "hdgst": ${hdgst:-false}, 00:35:13.846 "ddgst": ${ddgst:-false} 00:35:13.846 }, 00:35:13.846 "method": "bdev_nvme_attach_controller" 00:35:13.846 } 00:35:13.846 EOF 00:35:13.846 )") 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:35:13.846 16:59:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:35:13.846 "params": { 00:35:13.846 "name": "Nvme1", 00:35:13.846 "trtype": "tcp", 00:35:13.846 "traddr": "10.0.0.2", 00:35:13.846 "adrfam": "ipv4", 00:35:13.846 "trsvcid": "4420", 00:35:13.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:13.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:13.846 "hdgst": false, 00:35:13.846 "ddgst": false 00:35:13.846 }, 00:35:13.846 "method": "bdev_nvme_attach_controller" 00:35:13.846 }' 00:35:13.846 [2024-11-05 16:59:20.898625] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:35:13.846 [2024-11-05 16:59:20.898682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3361002 ] 00:35:14.106 [2024-11-05 16:59:20.969886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.106 [2024-11-05 16:59:21.005061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.366 Running I/O for 15 seconds... 00:35:16.247 9013.00 IOPS, 35.21 MiB/s [2024-11-05T15:59:23.882Z] 9703.00 IOPS, 37.90 MiB/s [2024-11-05T15:59:23.882Z] 16:59:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3360366 00:35:16.819 16:59:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:16.819 [2024-11-05 16:59:23.864501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:88 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:16.819 [2024-11-05 16:59:23.864721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.864987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.864996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 
[2024-11-05 16:59:23.865046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.819 [2024-11-05 16:59:23.865155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.819 [2024-11-05 16:59:23.865166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84160 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 
16:59:23.865343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865437] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.820 [2024-11-05 16:59:23.865606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:16.820 [2024-11-05 16:59:23.865838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.820 [2024-11-05 16:59:23.865855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.820 [2024-11-05 16:59:23.865866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.865874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.865883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.865891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.865901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.865908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.865918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.865925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.865934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.865943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.865952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.865960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.865969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.865976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.865986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.865993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:16.821 [2024-11-05 16:59:23.866134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 
[2024-11-05 16:59:23.866425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.821 [2024-11-05 16:59:23.866546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.821 [2024-11-05 16:59:23.866555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 
[2024-11-05 16:59:23.866715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.822 [2024-11-05 16:59:23.866787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a63f0 is same with the state(6) to be set 00:35:16.822 [2024-11-05 16:59:23.866805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:16.822 [2024-11-05 16:59:23.866811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:16.822 
[2024-11-05 16:59:23.866818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83856 len:8 PRP1 0x0 PRP2 0x0 00:35:16.822 [2024-11-05 16:59:23.866826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:16.822 [2024-11-05 16:59:23.866918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:16.822 [2024-11-05 16:59:23.866935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:16.822 [2024-11-05 16:59:23.866950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:16.822 [2024-11-05 16:59:23.866966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.822 [2024-11-05 16:59:23.866974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:16.822 [2024-11-05 16:59:23.870504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] 
resetting controller 00:35:16.822 [2024-11-05 16:59:23.870526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:16.822 [2024-11-05 16:59:23.871338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.822 [2024-11-05 16:59:23.871357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:16.822 [2024-11-05 16:59:23.871365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:16.822 [2024-11-05 16:59:23.871585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:16.822 [2024-11-05 16:59:23.871816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.822 [2024-11-05 16:59:23.871827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.822 [2024-11-05 16:59:23.871836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.822 [2024-11-05 16:59:23.871845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.084 [2024-11-05 16:59:23.884609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.084 [2024-11-05 16:59:23.885209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.084 [2024-11-05 16:59:23.885228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.084 [2024-11-05 16:59:23.885236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.084 [2024-11-05 16:59:23.885456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.084 [2024-11-05 16:59:23.885676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.084 [2024-11-05 16:59:23.885685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.084 [2024-11-05 16:59:23.885693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.084 [2024-11-05 16:59:23.885700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.084 [2024-11-05 16:59:23.898548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.084 [2024-11-05 16:59:23.899006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.084 [2024-11-05 16:59:23.899024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.084 [2024-11-05 16:59:23.899032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.084 [2024-11-05 16:59:23.899251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.084 [2024-11-05 16:59:23.899471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.084 [2024-11-05 16:59:23.899480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.084 [2024-11-05 16:59:23.899488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.084 [2024-11-05 16:59:23.899494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.084 [2024-11-05 16:59:23.912488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.084 [2024-11-05 16:59:23.913139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.084 [2024-11-05 16:59:23.913179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.084 [2024-11-05 16:59:23.913190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.084 [2024-11-05 16:59:23.913432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.084 [2024-11-05 16:59:23.913656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.084 [2024-11-05 16:59:23.913666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.084 [2024-11-05 16:59:23.913678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.084 [2024-11-05 16:59:23.913686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.084 [2024-11-05 16:59:23.926455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.084 [2024-11-05 16:59:23.927096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.084 [2024-11-05 16:59:23.927117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.084 [2024-11-05 16:59:23.927125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.084 [2024-11-05 16:59:23.927344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.084 [2024-11-05 16:59:23.927564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.084 [2024-11-05 16:59:23.927574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.084 [2024-11-05 16:59:23.927581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.084 [2024-11-05 16:59:23.927588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.084 [2024-11-05 16:59:23.940345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.084 [2024-11-05 16:59:23.941069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.084 [2024-11-05 16:59:23.941108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.084 [2024-11-05 16:59:23.941120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.084 [2024-11-05 16:59:23.941361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.084 [2024-11-05 16:59:23.941584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.084 [2024-11-05 16:59:23.941594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.084 [2024-11-05 16:59:23.941602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.084 [2024-11-05 16:59:23.941610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.084 [2024-11-05 16:59:23.954176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.084 [2024-11-05 16:59:23.954835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.084 [2024-11-05 16:59:23.954874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.084 [2024-11-05 16:59:23.954886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.084 [2024-11-05 16:59:23.955129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.084 [2024-11-05 16:59:23.955353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.084 [2024-11-05 16:59:23.955363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:23.955371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:23.955379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:23.968167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:23.968740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:23.968787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:23.968798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:23.969037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.085 [2024-11-05 16:59:23.969261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.085 [2024-11-05 16:59:23.969270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:23.969278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:23.969286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:23.982049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:23.982588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:23.982608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:23.982616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:23.982841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.085 [2024-11-05 16:59:23.983062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.085 [2024-11-05 16:59:23.983072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:23.983080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:23.983087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:23.995850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:23.996481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:23.996519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:23.996530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:23.996777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.085 [2024-11-05 16:59:23.997002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.085 [2024-11-05 16:59:23.997011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:23.997019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:23.997028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:24.009802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:24.010485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:24.010524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:24.010540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:24.010786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.085 [2024-11-05 16:59:24.011011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.085 [2024-11-05 16:59:24.011021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:24.011029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:24.011037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:24.023802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:24.024434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:24.024474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:24.024484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:24.024723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.085 [2024-11-05 16:59:24.024956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.085 [2024-11-05 16:59:24.024967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:24.024974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:24.024982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:24.037958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:24.038546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:24.038565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:24.038573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:24.038799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.085 [2024-11-05 16:59:24.039021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.085 [2024-11-05 16:59:24.039031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:24.039039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:24.039047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:24.051806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:24.052435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:24.052473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:24.052484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:24.052722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.085 [2024-11-05 16:59:24.052961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.085 [2024-11-05 16:59:24.052972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:24.052981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:24.052989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:24.065763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:24.066229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:24.066250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:24.066258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:24.066479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.085 [2024-11-05 16:59:24.066699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.085 [2024-11-05 16:59:24.066708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.085 [2024-11-05 16:59:24.066715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.085 [2024-11-05 16:59:24.066722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.085 [2024-11-05 16:59:24.079693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.085 [2024-11-05 16:59:24.080269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.085 [2024-11-05 16:59:24.080286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.085 [2024-11-05 16:59:24.080294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.085 [2024-11-05 16:59:24.080514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.086 [2024-11-05 16:59:24.080734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.086 [2024-11-05 16:59:24.080743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.086 [2024-11-05 16:59:24.080756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.086 [2024-11-05 16:59:24.080763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.086 [2024-11-05 16:59:24.093638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.086 [2024-11-05 16:59:24.094174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.086 [2024-11-05 16:59:24.094192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.086 [2024-11-05 16:59:24.094199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.086 [2024-11-05 16:59:24.094419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.086 [2024-11-05 16:59:24.094639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.086 [2024-11-05 16:59:24.094649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.086 [2024-11-05 16:59:24.094661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.086 [2024-11-05 16:59:24.094668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.086 [2024-11-05 16:59:24.107438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.086 [2024-11-05 16:59:24.108073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.086 [2024-11-05 16:59:24.108113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.086 [2024-11-05 16:59:24.108123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.086 [2024-11-05 16:59:24.108362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.086 [2024-11-05 16:59:24.108585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.086 [2024-11-05 16:59:24.108596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.086 [2024-11-05 16:59:24.108604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.086 [2024-11-05 16:59:24.108612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.086 [2024-11-05 16:59:24.121383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.086 [2024-11-05 16:59:24.122069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.086 [2024-11-05 16:59:24.122107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.086 [2024-11-05 16:59:24.122120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.086 [2024-11-05 16:59:24.122359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.086 [2024-11-05 16:59:24.122583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.086 [2024-11-05 16:59:24.122593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.086 [2024-11-05 16:59:24.122601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.086 [2024-11-05 16:59:24.122609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.086 [2024-11-05 16:59:24.135389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.086 [2024-11-05 16:59:24.135964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.086 [2024-11-05 16:59:24.135985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.086 [2024-11-05 16:59:24.135993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.086 [2024-11-05 16:59:24.136214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.086 [2024-11-05 16:59:24.136434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.086 [2024-11-05 16:59:24.136443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.086 [2024-11-05 16:59:24.136451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.086 [2024-11-05 16:59:24.136459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.348 [2024-11-05 16:59:24.149252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.348 [2024-11-05 16:59:24.149772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.348 [2024-11-05 16:59:24.149791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.348 [2024-11-05 16:59:24.149799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.348 [2024-11-05 16:59:24.150018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.348 [2024-11-05 16:59:24.150238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.348 [2024-11-05 16:59:24.150247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.348 [2024-11-05 16:59:24.150254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.348 [2024-11-05 16:59:24.150262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.348 [2024-11-05 16:59:24.163243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.348 [2024-11-05 16:59:24.163849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.348 [2024-11-05 16:59:24.163888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.348 [2024-11-05 16:59:24.163901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.348 [2024-11-05 16:59:24.164141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.348 [2024-11-05 16:59:24.164364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.164374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.164382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.164390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.177161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.177740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.177767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.177775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.177995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.178216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.178225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.178233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.178240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.190997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.191658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.191696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.191713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.191961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.192186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.192196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.192205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.192213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.205000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.205559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.205580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.205589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.205814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.206035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.206045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.206053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.206061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 8930.67 IOPS, 34.89 MiB/s [2024-11-05T15:59:24.412Z] [2024-11-05 16:59:24.218807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.219374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.219393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.219402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.219622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.219848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.219858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.219866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.219873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.232624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.233195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.233212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.233220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.233439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.233665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.233675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.233683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.233690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.246449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.247118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.247157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.247168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.247408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.247633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.247644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.247652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.247660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.260442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.261863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.261889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.261898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.262125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.262347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.262356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.262365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.262372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.274303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.274893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.274912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.274920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.275140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.275361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.275371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.275382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.275389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.288183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.288756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.288774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.288783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.289002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.349 [2024-11-05 16:59:24.289222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.349 [2024-11-05 16:59:24.289230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.349 [2024-11-05 16:59:24.289238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.349 [2024-11-05 16:59:24.289245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.349 [2024-11-05 16:59:24.302009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.349 [2024-11-05 16:59:24.302569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.349 [2024-11-05 16:59:24.302586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.349 [2024-11-05 16:59:24.302595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.349 [2024-11-05 16:59:24.302827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.350 [2024-11-05 16:59:24.303048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.350 [2024-11-05 16:59:24.303058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.350 [2024-11-05 16:59:24.303065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.350 [2024-11-05 16:59:24.303072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.350 [2024-11-05 16:59:24.315826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.350 [2024-11-05 16:59:24.316454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.350 [2024-11-05 16:59:24.316493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.350 [2024-11-05 16:59:24.316504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.350 [2024-11-05 16:59:24.316743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.350 [2024-11-05 16:59:24.316976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.350 [2024-11-05 16:59:24.316986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.350 [2024-11-05 16:59:24.316994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.350 [2024-11-05 16:59:24.317002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.350 [2024-11-05 16:59:24.329772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.350 [2024-11-05 16:59:24.330457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.350 [2024-11-05 16:59:24.330497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.350 [2024-11-05 16:59:24.330508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.350 [2024-11-05 16:59:24.330754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.350 [2024-11-05 16:59:24.330979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.350 [2024-11-05 16:59:24.330990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.350 [2024-11-05 16:59:24.330998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.350 [2024-11-05 16:59:24.331006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.350 [2024-11-05 16:59:24.343565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.350 [2024-11-05 16:59:24.344209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.350 [2024-11-05 16:59:24.344247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.350 [2024-11-05 16:59:24.344258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.350 [2024-11-05 16:59:24.344496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.350 [2024-11-05 16:59:24.344720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.350 [2024-11-05 16:59:24.344730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.350 [2024-11-05 16:59:24.344738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.350 [2024-11-05 16:59:24.344754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.350 [2024-11-05 16:59:24.357515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.350 [2024-11-05 16:59:24.358180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.350 [2024-11-05 16:59:24.358218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.350 [2024-11-05 16:59:24.358230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.350 [2024-11-05 16:59:24.358468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.350 [2024-11-05 16:59:24.358692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.350 [2024-11-05 16:59:24.358701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.350 [2024-11-05 16:59:24.358709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.350 [2024-11-05 16:59:24.358717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.350 [2024-11-05 16:59:24.371501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.350 [2024-11-05 16:59:24.372054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.350 [2024-11-05 16:59:24.372075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.350 [2024-11-05 16:59:24.372088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.350 [2024-11-05 16:59:24.372308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.350 [2024-11-05 16:59:24.372529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.350 [2024-11-05 16:59:24.372539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.350 [2024-11-05 16:59:24.372546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.350 [2024-11-05 16:59:24.372553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.350 [2024-11-05 16:59:24.385316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.350 [2024-11-05 16:59:24.385866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.350 [2024-11-05 16:59:24.385884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.350 [2024-11-05 16:59:24.385892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.350 [2024-11-05 16:59:24.386112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.350 [2024-11-05 16:59:24.386331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.350 [2024-11-05 16:59:24.386340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.350 [2024-11-05 16:59:24.386347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.350 [2024-11-05 16:59:24.386354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.350 [2024-11-05 16:59:24.399109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.350 [2024-11-05 16:59:24.399742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.350 [2024-11-05 16:59:24.399787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.350 [2024-11-05 16:59:24.399800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.350 [2024-11-05 16:59:24.400040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.350 [2024-11-05 16:59:24.400263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.350 [2024-11-05 16:59:24.400273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.350 [2024-11-05 16:59:24.400282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.350 [2024-11-05 16:59:24.400290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.611 [2024-11-05 16:59:24.413102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.611 [2024-11-05 16:59:24.413683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.611 [2024-11-05 16:59:24.413703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.611 [2024-11-05 16:59:24.413712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.611 [2024-11-05 16:59:24.413938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.611 [2024-11-05 16:59:24.414164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.611 [2024-11-05 16:59:24.414174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.611 [2024-11-05 16:59:24.414181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.611 [2024-11-05 16:59:24.414189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.611 [2024-11-05 16:59:24.426949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.611 [2024-11-05 16:59:24.427585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.611 [2024-11-05 16:59:24.427623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.611 [2024-11-05 16:59:24.427634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.611 [2024-11-05 16:59:24.427881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.611 [2024-11-05 16:59:24.428106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.611 [2024-11-05 16:59:24.428116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.611 [2024-11-05 16:59:24.428124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.611 [2024-11-05 16:59:24.428133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.611 [2024-11-05 16:59:24.440899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.611 [2024-11-05 16:59:24.441571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.611 [2024-11-05 16:59:24.441610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.611 [2024-11-05 16:59:24.441621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.611 [2024-11-05 16:59:24.441868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.611 [2024-11-05 16:59:24.442094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.611 [2024-11-05 16:59:24.442103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.611 [2024-11-05 16:59:24.442111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.611 [2024-11-05 16:59:24.442119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.611 [2024-11-05 16:59:24.454886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.611 [2024-11-05 16:59:24.455560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.611 [2024-11-05 16:59:24.455599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.611 [2024-11-05 16:59:24.455610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.611 [2024-11-05 16:59:24.455856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.611 [2024-11-05 16:59:24.456081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.611 [2024-11-05 16:59:24.456091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.611 [2024-11-05 16:59:24.456103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.611 [2024-11-05 16:59:24.456111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.611 [2024-11-05 16:59:24.468884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.611 [2024-11-05 16:59:24.469436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.611 [2024-11-05 16:59:24.469475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.611 [2024-11-05 16:59:24.469486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.611 [2024-11-05 16:59:24.469724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.611 [2024-11-05 16:59:24.469958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.611 [2024-11-05 16:59:24.469968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.611 [2024-11-05 16:59:24.469976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.611 [2024-11-05 16:59:24.469985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.611 [2024-11-05 16:59:24.482751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.611 [2024-11-05 16:59:24.483381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.611 [2024-11-05 16:59:24.483419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.611 [2024-11-05 16:59:24.483431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.611 [2024-11-05 16:59:24.483669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.611 [2024-11-05 16:59:24.483902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.611 [2024-11-05 16:59:24.483913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.611 [2024-11-05 16:59:24.483921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.611 [2024-11-05 16:59:24.483929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.611 [2024-11-05 16:59:24.496690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.611 [2024-11-05 16:59:24.497366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.611 [2024-11-05 16:59:24.497405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.611 [2024-11-05 16:59:24.497416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.611 [2024-11-05 16:59:24.497655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.611 [2024-11-05 16:59:24.497888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.611 [2024-11-05 16:59:24.497899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.611 [2024-11-05 16:59:24.497907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.611 [2024-11-05 16:59:24.497915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.611 [2024-11-05 16:59:24.510687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.611 [2024-11-05 16:59:24.511273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.611 [2024-11-05 16:59:24.511294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.611 [2024-11-05 16:59:24.511302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.611 [2024-11-05 16:59:24.511523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.611 [2024-11-05 16:59:24.511744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.611 [2024-11-05 16:59:24.511762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.611 [2024-11-05 16:59:24.511769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.611 [2024-11-05 16:59:24.511777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.611 [2024-11-05 16:59:24.524528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.611 [2024-11-05 16:59:24.525072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.611 [2024-11-05 16:59:24.525091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.611 [2024-11-05 16:59:24.525099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.611 [2024-11-05 16:59:24.525318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.611 [2024-11-05 16:59:24.525537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.611 [2024-11-05 16:59:24.525546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.611 [2024-11-05 16:59:24.525554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.611 [2024-11-05 16:59:24.525561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.611 [2024-11-05 16:59:24.538319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.611 [2024-11-05 16:59:24.538965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.611 [2024-11-05 16:59:24.539003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.611 [2024-11-05 16:59:24.539013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.539252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.539476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.539485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.539493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.539501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.552265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.552865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.552904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.552920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.553158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.553382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.553392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.553400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.553408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.566193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.566885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.566923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.566934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.567173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.567397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.567406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.567414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.567422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.580190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.580860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.580900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.580911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.581149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.581372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.581382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.581390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.581398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.594165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.594833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.594872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.594884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.595126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.595355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.595365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.595374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.595382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.607961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.608637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.608675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.608686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.608935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.609160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.609170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.609178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.609187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.621948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.622580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.622618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.622631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.622881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.623107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.623117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.623125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.623133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.635900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.636542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.636581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.636592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.636837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.637062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.637072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.637084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.637092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.649854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.650529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.650568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.650578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.650825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.651049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.651060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.651068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.651076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.612 [2024-11-05 16:59:24.663853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.612 [2024-11-05 16:59:24.664432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.612 [2024-11-05 16:59:24.664452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.612 [2024-11-05 16:59:24.664460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.612 [2024-11-05 16:59:24.664680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.612 [2024-11-05 16:59:24.664909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.612 [2024-11-05 16:59:24.664919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.612 [2024-11-05 16:59:24.664926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.612 [2024-11-05 16:59:24.664933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.677683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.678246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.678263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.874 [2024-11-05 16:59:24.678271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.874 [2024-11-05 16:59:24.678491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.874 [2024-11-05 16:59:24.678710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.874 [2024-11-05 16:59:24.678719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.874 [2024-11-05 16:59:24.678727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.874 [2024-11-05 16:59:24.678734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.691499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.692023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.692041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.874 [2024-11-05 16:59:24.692048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.874 [2024-11-05 16:59:24.692267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.874 [2024-11-05 16:59:24.692488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.874 [2024-11-05 16:59:24.692496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.874 [2024-11-05 16:59:24.692504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.874 [2024-11-05 16:59:24.692510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.705479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.706112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.706151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.874 [2024-11-05 16:59:24.706162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.874 [2024-11-05 16:59:24.706400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.874 [2024-11-05 16:59:24.706624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.874 [2024-11-05 16:59:24.706634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.874 [2024-11-05 16:59:24.706642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.874 [2024-11-05 16:59:24.706651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.719412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.720052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.720091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.874 [2024-11-05 16:59:24.720102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.874 [2024-11-05 16:59:24.720340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.874 [2024-11-05 16:59:24.720564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.874 [2024-11-05 16:59:24.720574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.874 [2024-11-05 16:59:24.720582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.874 [2024-11-05 16:59:24.720590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.733366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.734046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.734084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.874 [2024-11-05 16:59:24.734100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.874 [2024-11-05 16:59:24.734338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.874 [2024-11-05 16:59:24.734562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.874 [2024-11-05 16:59:24.734572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.874 [2024-11-05 16:59:24.734580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.874 [2024-11-05 16:59:24.734588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.747350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.747884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.747923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.874 [2024-11-05 16:59:24.747935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.874 [2024-11-05 16:59:24.748175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.874 [2024-11-05 16:59:24.748399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.874 [2024-11-05 16:59:24.748408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.874 [2024-11-05 16:59:24.748416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.874 [2024-11-05 16:59:24.748424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.761198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.761863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.761902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.874 [2024-11-05 16:59:24.761913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.874 [2024-11-05 16:59:24.762151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.874 [2024-11-05 16:59:24.762375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.874 [2024-11-05 16:59:24.762385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.874 [2024-11-05 16:59:24.762392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.874 [2024-11-05 16:59:24.762401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.775183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.775760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.775780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.874 [2024-11-05 16:59:24.775789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.874 [2024-11-05 16:59:24.776008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.874 [2024-11-05 16:59:24.776234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.874 [2024-11-05 16:59:24.776243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.874 [2024-11-05 16:59:24.776251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.874 [2024-11-05 16:59:24.776257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.874 [2024-11-05 16:59:24.789053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.874 [2024-11-05 16:59:24.789670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.874 [2024-11-05 16:59:24.789709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.875 [2024-11-05 16:59:24.789720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.875 [2024-11-05 16:59:24.789967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.875 [2024-11-05 16:59:24.790191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.875 [2024-11-05 16:59:24.790201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.875 [2024-11-05 16:59:24.790209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.875 [2024-11-05 16:59:24.790217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.875 [2024-11-05 16:59:24.802993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.875 [2024-11-05 16:59:24.803666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.875 [2024-11-05 16:59:24.803704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.875 [2024-11-05 16:59:24.803715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.875 [2024-11-05 16:59:24.803962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.875 [2024-11-05 16:59:24.804187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.875 [2024-11-05 16:59:24.804197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.875 [2024-11-05 16:59:24.804205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.875 [2024-11-05 16:59:24.804214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.875 [2024-11-05 16:59:24.816973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:17.875 [2024-11-05 16:59:24.817670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.875 [2024-11-05 16:59:24.817708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:17.875 [2024-11-05 16:59:24.817719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:17.875 [2024-11-05 16:59:24.817967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:17.875 [2024-11-05 16:59:24.818192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:17.875 [2024-11-05 16:59:24.818201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:17.875 [2024-11-05 16:59:24.818213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:17.875 [2024-11-05 16:59:24.818222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:17.875 [2024-11-05 16:59:24.830771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.875 [2024-11-05 16:59:24.831294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.875 [2024-11-05 16:59:24.831333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.875 [2024-11-05 16:59:24.831344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.875 [2024-11-05 16:59:24.831582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.875 [2024-11-05 16:59:24.831815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.875 [2024-11-05 16:59:24.831826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.875 [2024-11-05 16:59:24.831834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.875 [2024-11-05 16:59:24.831841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.875 [2024-11-05 16:59:24.844609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.875 [2024-11-05 16:59:24.845252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.875 [2024-11-05 16:59:24.845291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.875 [2024-11-05 16:59:24.845302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.875 [2024-11-05 16:59:24.845541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.875 [2024-11-05 16:59:24.845775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.875 [2024-11-05 16:59:24.845785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.875 [2024-11-05 16:59:24.845793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.875 [2024-11-05 16:59:24.845801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.875 [2024-11-05 16:59:24.858557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.875 [2024-11-05 16:59:24.859234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.875 [2024-11-05 16:59:24.859272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.875 [2024-11-05 16:59:24.859283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.875 [2024-11-05 16:59:24.859521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.875 [2024-11-05 16:59:24.859745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.875 [2024-11-05 16:59:24.859766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.875 [2024-11-05 16:59:24.859773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.875 [2024-11-05 16:59:24.859782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.875 [2024-11-05 16:59:24.872551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.875 [2024-11-05 16:59:24.873165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.875 [2024-11-05 16:59:24.873202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.875 [2024-11-05 16:59:24.873215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.875 [2024-11-05 16:59:24.873455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.875 [2024-11-05 16:59:24.873682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.875 [2024-11-05 16:59:24.873692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.875 [2024-11-05 16:59:24.873701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.875 [2024-11-05 16:59:24.873711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.875 [2024-11-05 16:59:24.886490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.875 [2024-11-05 16:59:24.887181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.875 [2024-11-05 16:59:24.887220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.875 [2024-11-05 16:59:24.887231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.875 [2024-11-05 16:59:24.887470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.875 [2024-11-05 16:59:24.887694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.875 [2024-11-05 16:59:24.887704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.875 [2024-11-05 16:59:24.887712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.875 [2024-11-05 16:59:24.887720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.875 [2024-11-05 16:59:24.900385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.875 [2024-11-05 16:59:24.900956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.875 [2024-11-05 16:59:24.900977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.875 [2024-11-05 16:59:24.900986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.875 [2024-11-05 16:59:24.901205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.875 [2024-11-05 16:59:24.901426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.875 [2024-11-05 16:59:24.901435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.875 [2024-11-05 16:59:24.901442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.875 [2024-11-05 16:59:24.901449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.875 [2024-11-05 16:59:24.914226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.875 [2024-11-05 16:59:24.914958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.875 [2024-11-05 16:59:24.914997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.875 [2024-11-05 16:59:24.915016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.875 [2024-11-05 16:59:24.915254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.875 [2024-11-05 16:59:24.915478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.875 [2024-11-05 16:59:24.915488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.875 [2024-11-05 16:59:24.915496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.875 [2024-11-05 16:59:24.915504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:17.875 [2024-11-05 16:59:24.928063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:17.876 [2024-11-05 16:59:24.928703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.876 [2024-11-05 16:59:24.928741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:17.876 [2024-11-05 16:59:24.928761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:17.876 [2024-11-05 16:59:24.928999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:17.876 [2024-11-05 16:59:24.929223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:17.876 [2024-11-05 16:59:24.929233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:17.876 [2024-11-05 16:59:24.929241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:17.876 [2024-11-05 16:59:24.929249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:24.942010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:24.942670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:24.942709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:24.942722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:24.942971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:24.943196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:24.943206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:24.943214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:24.943222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:24.955982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:24.956610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:24.956648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:24.956659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:24.956907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:24.957137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:24.957147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:24.957154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:24.957163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:24.969930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:24.970580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:24.970619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:24.970630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:24.970878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:24.971103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:24.971112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:24.971120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:24.971128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:24.983883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:24.984504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:24.984543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:24.984554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:24.984802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:24.985027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:24.985039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:24.985047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:24.985055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:24.997813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:24.998453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:24.998492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:24.998503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:24.998741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:24.998975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:24.998985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:24.998998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:24.999006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:25.011780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:25.012408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:25.012447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:25.012458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:25.012696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:25.012929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:25.012940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:25.012948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:25.012957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:25.025715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:25.026390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:25.026429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:25.026440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:25.026678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:25.026912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:25.026923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:25.026931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:25.026939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:25.039911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:25.040495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:25.040515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:25.040524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:25.040744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:25.040974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:25.040983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:25.040990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:25.040997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:25.053754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:25.054281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:25.054299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.137 [2024-11-05 16:59:25.054306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.137 [2024-11-05 16:59:25.054526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.137 [2024-11-05 16:59:25.054752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.137 [2024-11-05 16:59:25.054762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.137 [2024-11-05 16:59:25.054769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.137 [2024-11-05 16:59:25.054776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.137 [2024-11-05 16:59:25.067538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.137 [2024-11-05 16:59:25.068119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.137 [2024-11-05 16:59:25.068137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.138 [2024-11-05 16:59:25.068145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.138 [2024-11-05 16:59:25.068364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.138 [2024-11-05 16:59:25.068583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.138 [2024-11-05 16:59:25.068592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.138 [2024-11-05 16:59:25.068600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.138 [2024-11-05 16:59:25.068606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.138 [2024-11-05 16:59:25.081360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.138 [2024-11-05 16:59:25.081877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.138 [2024-11-05 16:59:25.081916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.138 [2024-11-05 16:59:25.081929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.138 [2024-11-05 16:59:25.082170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.138 [2024-11-05 16:59:25.082394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.138 [2024-11-05 16:59:25.082403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.138 [2024-11-05 16:59:25.082411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.138 [2024-11-05 16:59:25.082419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.138 [2024-11-05 16:59:25.095203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.138 [2024-11-05 16:59:25.095701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.138 [2024-11-05 16:59:25.095721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.138 [2024-11-05 16:59:25.095733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.138 [2024-11-05 16:59:25.095960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.138 [2024-11-05 16:59:25.096180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.138 [2024-11-05 16:59:25.096190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.138 [2024-11-05 16:59:25.096197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.138 [2024-11-05 16:59:25.096204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.138 [2024-11-05 16:59:25.109258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.138 [2024-11-05 16:59:25.109892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.138 [2024-11-05 16:59:25.109931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.138 [2024-11-05 16:59:25.109942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.138 [2024-11-05 16:59:25.110181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.138 [2024-11-05 16:59:25.110405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.138 [2024-11-05 16:59:25.110415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.138 [2024-11-05 16:59:25.110423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.138 [2024-11-05 16:59:25.110431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.138 [2024-11-05 16:59:25.123198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.138 [2024-11-05 16:59:25.123855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.138 [2024-11-05 16:59:25.123894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.138 [2024-11-05 16:59:25.123906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.138 [2024-11-05 16:59:25.124145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.138 [2024-11-05 16:59:25.124369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.138 [2024-11-05 16:59:25.124380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.138 [2024-11-05 16:59:25.124388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.138 [2024-11-05 16:59:25.124396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.138 [2024-11-05 16:59:25.137176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.138 [2024-11-05 16:59:25.137809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.138 [2024-11-05 16:59:25.137848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.138 [2024-11-05 16:59:25.137860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.138 [2024-11-05 16:59:25.138099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.138 [2024-11-05 16:59:25.138328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.138 [2024-11-05 16:59:25.138338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.138 [2024-11-05 16:59:25.138345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.138 [2024-11-05 16:59:25.138354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.138 [2024-11-05 16:59:25.151120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.138 [2024-11-05 16:59:25.151846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.138 [2024-11-05 16:59:25.151885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.138 [2024-11-05 16:59:25.151896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.138 [2024-11-05 16:59:25.152134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.138 [2024-11-05 16:59:25.152359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.138 [2024-11-05 16:59:25.152368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.138 [2024-11-05 16:59:25.152376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.138 [2024-11-05 16:59:25.152384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.138 [2024-11-05 16:59:25.164946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.138 [2024-11-05 16:59:25.165596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.138 [2024-11-05 16:59:25.165635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.138 [2024-11-05 16:59:25.165646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.138 [2024-11-05 16:59:25.165895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.138 [2024-11-05 16:59:25.166119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.138 [2024-11-05 16:59:25.166129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.138 [2024-11-05 16:59:25.166137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.138 [2024-11-05 16:59:25.166145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.138 [2024-11-05 16:59:25.178902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.138 [2024-11-05 16:59:25.179516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.138 [2024-11-05 16:59:25.179555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.138 [2024-11-05 16:59:25.179565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.138 [2024-11-05 16:59:25.179815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.138 [2024-11-05 16:59:25.180039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.138 [2024-11-05 16:59:25.180050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.138 [2024-11-05 16:59:25.180062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.138 [2024-11-05 16:59:25.180070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.139 [2024-11-05 16:59:25.192830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.139 [2024-11-05 16:59:25.193408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.139 [2024-11-05 16:59:25.193428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.139 [2024-11-05 16:59:25.193436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.139 [2024-11-05 16:59:25.193656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.139 [2024-11-05 16:59:25.193885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.139 [2024-11-05 16:59:25.193896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.139 [2024-11-05 16:59:25.193903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.139 [2024-11-05 16:59:25.193910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.400 [2024-11-05 16:59:25.206686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.400 [2024-11-05 16:59:25.207254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.400 [2024-11-05 16:59:25.207272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.400 [2024-11-05 16:59:25.207280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.400 [2024-11-05 16:59:25.207499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.400 [2024-11-05 16:59:25.207719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.400 [2024-11-05 16:59:25.207730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.400 [2024-11-05 16:59:25.207737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.400 [2024-11-05 16:59:25.207744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.400 6698.00 IOPS, 26.16 MiB/s [2024-11-05T15:59:25.463Z]
00:35:18.400 [2024-11-05 16:59:25.220474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.400 [2024-11-05 16:59:25.221098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.400 [2024-11-05 16:59:25.221137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.400 [2024-11-05 16:59:25.221148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.400 [2024-11-05 16:59:25.221387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.400 [2024-11-05 16:59:25.221611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.400 [2024-11-05 16:59:25.221620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.400 [2024-11-05 16:59:25.221628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.400 [2024-11-05 16:59:25.221636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.400 [2024-11-05 16:59:25.234404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.400 [2024-11-05 16:59:25.235062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.400 [2024-11-05 16:59:25.235100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.400 [2024-11-05 16:59:25.235111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.400 [2024-11-05 16:59:25.235350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.400 [2024-11-05 16:59:25.235574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.400 [2024-11-05 16:59:25.235583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.400 [2024-11-05 16:59:25.235591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.400 [2024-11-05 16:59:25.235599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.400 [2024-11-05 16:59:25.248369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.400 [2024-11-05 16:59:25.249055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.400 [2024-11-05 16:59:25.249094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.400 [2024-11-05 16:59:25.249104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.400 [2024-11-05 16:59:25.249343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.400 [2024-11-05 16:59:25.249567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.400 [2024-11-05 16:59:25.249577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.400 [2024-11-05 16:59:25.249585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.400 [2024-11-05 16:59:25.249593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.400 [2024-11-05 16:59:25.262357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.400 [2024-11-05 16:59:25.263008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.400 [2024-11-05 16:59:25.263046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.400 [2024-11-05 16:59:25.263057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.400 [2024-11-05 16:59:25.263295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.400 [2024-11-05 16:59:25.263519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.263528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.263537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.263545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.401 [2024-11-05 16:59:25.276315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.401 [2024-11-05 16:59:25.276978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.401 [2024-11-05 16:59:25.277017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.401 [2024-11-05 16:59:25.277033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.401 [2024-11-05 16:59:25.277271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.401 [2024-11-05 16:59:25.277496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.277505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.277513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.277521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.401 [2024-11-05 16:59:25.290301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.401 [2024-11-05 16:59:25.290872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.401 [2024-11-05 16:59:25.290911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.401 [2024-11-05 16:59:25.290923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.401 [2024-11-05 16:59:25.291165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.401 [2024-11-05 16:59:25.291388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.291399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.291407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.291415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.401 [2024-11-05 16:59:25.304190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.401 [2024-11-05 16:59:25.304819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.401 [2024-11-05 16:59:25.304858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.401 [2024-11-05 16:59:25.304871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.401 [2024-11-05 16:59:25.305114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.401 [2024-11-05 16:59:25.305340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.305351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.305359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.305367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.401 [2024-11-05 16:59:25.318130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.401 [2024-11-05 16:59:25.318789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.401 [2024-11-05 16:59:25.318827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.401 [2024-11-05 16:59:25.318840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.401 [2024-11-05 16:59:25.319082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.401 [2024-11-05 16:59:25.319311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.319322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.319329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.319337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.401 [2024-11-05 16:59:25.332113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.401 [2024-11-05 16:59:25.332807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.401 [2024-11-05 16:59:25.332846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.401 [2024-11-05 16:59:25.332858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.401 [2024-11-05 16:59:25.333100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.401 [2024-11-05 16:59:25.333324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.333334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.333342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.333350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.401 [2024-11-05 16:59:25.345921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.401 [2024-11-05 16:59:25.346621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.401 [2024-11-05 16:59:25.346659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.401 [2024-11-05 16:59:25.346670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.401 [2024-11-05 16:59:25.346919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.401 [2024-11-05 16:59:25.347144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.347154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.347162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.347170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.401 [2024-11-05 16:59:25.359722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.401 [2024-11-05 16:59:25.360302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.401 [2024-11-05 16:59:25.360322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.401 [2024-11-05 16:59:25.360330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.401 [2024-11-05 16:59:25.360550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.401 [2024-11-05 16:59:25.360779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.360789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.360801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.360809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.401 [2024-11-05 16:59:25.373565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.401 [2024-11-05 16:59:25.374210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.401 [2024-11-05 16:59:25.374250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.401 [2024-11-05 16:59:25.374262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.401 [2024-11-05 16:59:25.374501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.401 [2024-11-05 16:59:25.374725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.401 [2024-11-05 16:59:25.374736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.401 [2024-11-05 16:59:25.374744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.401 [2024-11-05 16:59:25.374763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.402 [2024-11-05 16:59:25.387534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.402 [2024-11-05 16:59:25.388232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.402 [2024-11-05 16:59:25.388271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.402 [2024-11-05 16:59:25.388282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.402 [2024-11-05 16:59:25.388521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.402 [2024-11-05 16:59:25.388744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.402 [2024-11-05 16:59:25.388765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.402 [2024-11-05 16:59:25.388772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.402 [2024-11-05 16:59:25.388781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.402 [2024-11-05 16:59:25.401535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.402 [2024-11-05 16:59:25.402177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.402 [2024-11-05 16:59:25.402215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.402 [2024-11-05 16:59:25.402226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.402 [2024-11-05 16:59:25.402464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.402 [2024-11-05 16:59:25.402688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.402 [2024-11-05 16:59:25.402698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.402 [2024-11-05 16:59:25.402705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.402 [2024-11-05 16:59:25.402714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.402 [2024-11-05 16:59:25.415492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.402 [2024-11-05 16:59:25.416074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.402 [2024-11-05 16:59:25.416095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.402 [2024-11-05 16:59:25.416103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.402 [2024-11-05 16:59:25.416323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.402 [2024-11-05 16:59:25.416544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.402 [2024-11-05 16:59:25.416553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.402 [2024-11-05 16:59:25.416560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.402 [2024-11-05 16:59:25.416567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.402 [2024-11-05 16:59:25.429321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.402 [2024-11-05 16:59:25.429991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.402 [2024-11-05 16:59:25.430030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.402 [2024-11-05 16:59:25.430040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.402 [2024-11-05 16:59:25.430279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.402 [2024-11-05 16:59:25.430503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.402 [2024-11-05 16:59:25.430513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.402 [2024-11-05 16:59:25.430520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.402 [2024-11-05 16:59:25.430528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.402 [2024-11-05 16:59:25.443292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.402 [2024-11-05 16:59:25.443973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.402 [2024-11-05 16:59:25.444012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.402 [2024-11-05 16:59:25.444023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.402 [2024-11-05 16:59:25.444261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.402 [2024-11-05 16:59:25.444485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.402 [2024-11-05 16:59:25.444495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.402 [2024-11-05 16:59:25.444503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.402 [2024-11-05 16:59:25.444511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.402 [2024-11-05 16:59:25.457300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.402 [2024-11-05 16:59:25.458034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.402 [2024-11-05 16:59:25.458072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.402 [2024-11-05 16:59:25.458088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.402 [2024-11-05 16:59:25.458326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.402 [2024-11-05 16:59:25.458550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.402 [2024-11-05 16:59:25.458559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.402 [2024-11-05 16:59:25.458568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.402 [2024-11-05 16:59:25.458576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.663 [2024-11-05 16:59:25.471150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.663 [2024-11-05 16:59:25.471688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.663 [2024-11-05 16:59:25.471708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.663 [2024-11-05 16:59:25.471716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.663 [2024-11-05 16:59:25.471942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.663 [2024-11-05 16:59:25.472163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.663 [2024-11-05 16:59:25.472172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.663 [2024-11-05 16:59:25.472180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.663 [2024-11-05 16:59:25.472187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.663 [2024-11-05 16:59:25.485147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:18.663 [2024-11-05 16:59:25.485803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.663 [2024-11-05 16:59:25.485842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:18.663 [2024-11-05 16:59:25.485854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:18.663 [2024-11-05 16:59:25.486094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:18.663 [2024-11-05 16:59:25.486317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:18.663 [2024-11-05 16:59:25.486327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:18.663 [2024-11-05 16:59:25.486335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:18.663 [2024-11-05 16:59:25.486343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:18.663 [2024-11-05 16:59:25.499126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.663 [2024-11-05 16:59:25.499793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.663 [2024-11-05 16:59:25.499832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.663 [2024-11-05 16:59:25.499845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.663 [2024-11-05 16:59:25.500086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.663 [2024-11-05 16:59:25.500315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.663 [2024-11-05 16:59:25.500326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.663 [2024-11-05 16:59:25.500334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.663 [2024-11-05 16:59:25.500342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.663 [2024-11-05 16:59:25.513129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.663 [2024-11-05 16:59:25.513708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.663 [2024-11-05 16:59:25.513729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.663 [2024-11-05 16:59:25.513737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.663 [2024-11-05 16:59:25.513963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.663 [2024-11-05 16:59:25.514184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.663 [2024-11-05 16:59:25.514193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.663 [2024-11-05 16:59:25.514201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.663 [2024-11-05 16:59:25.514208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.663 [2024-11-05 16:59:25.526965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.663 [2024-11-05 16:59:25.527522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.663 [2024-11-05 16:59:25.527539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.663 [2024-11-05 16:59:25.527547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.663 [2024-11-05 16:59:25.527771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.527991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.528002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.528009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.528016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.540784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.541348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.541364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.541372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.541590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.541815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.541825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.541836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.541842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.554628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.555208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.555226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.555234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.555452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.555672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.555681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.555688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.555695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.568490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.568904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.568922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.568930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.569149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.569369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.569378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.569386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.569393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.582383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.583029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.583069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.583079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.583317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.583541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.583552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.583560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.583568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.596339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.596879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.596899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.596907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.597127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.597348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.597357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.597364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.597371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.610149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.610811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.610851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.610864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.611106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.611330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.611339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.611347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.611356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.624139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.624806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.624845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.624856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.625095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.625319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.625329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.625337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.625345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.638115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.638536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.638556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.638569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.638795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.639017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.639027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.639034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.639041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.652000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.652574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.664 [2024-11-05 16:59:25.652592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.664 [2024-11-05 16:59:25.652600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.664 [2024-11-05 16:59:25.652823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.664 [2024-11-05 16:59:25.653046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.664 [2024-11-05 16:59:25.653055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.664 [2024-11-05 16:59:25.653062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.664 [2024-11-05 16:59:25.653069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.664 [2024-11-05 16:59:25.665833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.664 [2024-11-05 16:59:25.666391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.665 [2024-11-05 16:59:25.666408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.665 [2024-11-05 16:59:25.666416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.665 [2024-11-05 16:59:25.666635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.665 [2024-11-05 16:59:25.666862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.665 [2024-11-05 16:59:25.666873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.665 [2024-11-05 16:59:25.666880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.665 [2024-11-05 16:59:25.666889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.665 [2024-11-05 16:59:25.679642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.665 [2024-11-05 16:59:25.680306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.665 [2024-11-05 16:59:25.680345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.665 [2024-11-05 16:59:25.680356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.665 [2024-11-05 16:59:25.680595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.665 [2024-11-05 16:59:25.680833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.665 [2024-11-05 16:59:25.680844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.665 [2024-11-05 16:59:25.680852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.665 [2024-11-05 16:59:25.680860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.665 [2024-11-05 16:59:25.693624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.665 [2024-11-05 16:59:25.694187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.665 [2024-11-05 16:59:25.694208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.665 [2024-11-05 16:59:25.694216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.665 [2024-11-05 16:59:25.694437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.665 [2024-11-05 16:59:25.694657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.665 [2024-11-05 16:59:25.694666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.665 [2024-11-05 16:59:25.694674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.665 [2024-11-05 16:59:25.694681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.665 [2024-11-05 16:59:25.707453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.665 [2024-11-05 16:59:25.708082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.665 [2024-11-05 16:59:25.708121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.665 [2024-11-05 16:59:25.708134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.665 [2024-11-05 16:59:25.708373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.665 [2024-11-05 16:59:25.708597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.665 [2024-11-05 16:59:25.708607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.665 [2024-11-05 16:59:25.708615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.665 [2024-11-05 16:59:25.708623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.665 [2024-11-05 16:59:25.721395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.665 [2024-11-05 16:59:25.722049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.665 [2024-11-05 16:59:25.722089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.665 [2024-11-05 16:59:25.722100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.665 [2024-11-05 16:59:25.722338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.665 [2024-11-05 16:59:25.722562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.665 [2024-11-05 16:59:25.722572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.665 [2024-11-05 16:59:25.722585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.665 [2024-11-05 16:59:25.722594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.926 [2024-11-05 16:59:25.735361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.926 [2024-11-05 16:59:25.735790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.926 [2024-11-05 16:59:25.735817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.926 [2024-11-05 16:59:25.735826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.926 [2024-11-05 16:59:25.736052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.926 [2024-11-05 16:59:25.736273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.926 [2024-11-05 16:59:25.736282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.926 [2024-11-05 16:59:25.736290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.926 [2024-11-05 16:59:25.736297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.926 [2024-11-05 16:59:25.749272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.926 [2024-11-05 16:59:25.749968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.926 [2024-11-05 16:59:25.750007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.926 [2024-11-05 16:59:25.750018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.926 [2024-11-05 16:59:25.750257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.926 [2024-11-05 16:59:25.750480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.750490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.750498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.750506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.763069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.763755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.763793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.763805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.764043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.764278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.764289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.764297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.764305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.776872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.777537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.777576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.777587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.777833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.778057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.778067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.778075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.778083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.790843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.791515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.791553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.791564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.791811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.792035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.792046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.792054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.792063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.804833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.805514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.805553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.805564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.805812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.806036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.806046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.806054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.806062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.818648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.819327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.819366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.819381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.819620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.819852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.819863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.819871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.819879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.832636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.833194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.833215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.833223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.833443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.833663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.833672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.833679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.833686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.846444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.846996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.847014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.847022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.847241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.847461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.847471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.847478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.847485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.860244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.860846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.860885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.860897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.861139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.861368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.861377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.861385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.861393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.874171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.874867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.874906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.874917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.875155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.875379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.927 [2024-11-05 16:59:25.875390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.927 [2024-11-05 16:59:25.875398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.927 [2024-11-05 16:59:25.875407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.927 [2024-11-05 16:59:25.887977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.927 [2024-11-05 16:59:25.888545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.927 [2024-11-05 16:59:25.888565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.927 [2024-11-05 16:59:25.888574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.927 [2024-11-05 16:59:25.888801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.927 [2024-11-05 16:59:25.889022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.928 [2024-11-05 16:59:25.889032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.928 [2024-11-05 16:59:25.889039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.928 [2024-11-05 16:59:25.889048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.928 [2024-11-05 16:59:25.901805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.928 [2024-11-05 16:59:25.902365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.928 [2024-11-05 16:59:25.902382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.928 [2024-11-05 16:59:25.902390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.928 [2024-11-05 16:59:25.902609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.928 [2024-11-05 16:59:25.902834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.928 [2024-11-05 16:59:25.902844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.928 [2024-11-05 16:59:25.902856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.928 [2024-11-05 16:59:25.902863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.928 [2024-11-05 16:59:25.915631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.928 [2024-11-05 16:59:25.916296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.928 [2024-11-05 16:59:25.916335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.928 [2024-11-05 16:59:25.916346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.928 [2024-11-05 16:59:25.916584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.928 [2024-11-05 16:59:25.916813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.928 [2024-11-05 16:59:25.916823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.928 [2024-11-05 16:59:25.916831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.928 [2024-11-05 16:59:25.916839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.928 [2024-11-05 16:59:25.929697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.928 [2024-11-05 16:59:25.930378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.928 [2024-11-05 16:59:25.930417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.928 [2024-11-05 16:59:25.930428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.928 [2024-11-05 16:59:25.930666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.928 [2024-11-05 16:59:25.930896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.928 [2024-11-05 16:59:25.930907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.928 [2024-11-05 16:59:25.930914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.928 [2024-11-05 16:59:25.930923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.928 [2024-11-05 16:59:25.943685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.928 [2024-11-05 16:59:25.944262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.928 [2024-11-05 16:59:25.944283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.928 [2024-11-05 16:59:25.944291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.928 [2024-11-05 16:59:25.944510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.928 [2024-11-05 16:59:25.944730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.928 [2024-11-05 16:59:25.944740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.928 [2024-11-05 16:59:25.944753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.928 [2024-11-05 16:59:25.944760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.928 [2024-11-05 16:59:25.957528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.928 [2024-11-05 16:59:25.958152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.928 [2024-11-05 16:59:25.958190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.928 [2024-11-05 16:59:25.958201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.928 [2024-11-05 16:59:25.958440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.928 [2024-11-05 16:59:25.958664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.928 [2024-11-05 16:59:25.958674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.928 [2024-11-05 16:59:25.958682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.928 [2024-11-05 16:59:25.958690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.928 [2024-11-05 16:59:25.971465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.928 [2024-11-05 16:59:25.972004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.928 [2024-11-05 16:59:25.972025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.928 [2024-11-05 16:59:25.972033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.928 [2024-11-05 16:59:25.972253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.928 [2024-11-05 16:59:25.972473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.928 [2024-11-05 16:59:25.972483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.928 [2024-11-05 16:59:25.972490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.928 [2024-11-05 16:59:25.972498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:18.928 [2024-11-05 16:59:25.985262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:18.928 [2024-11-05 16:59:25.985854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.928 [2024-11-05 16:59:25.985892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:18.928 [2024-11-05 16:59:25.985904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:18.928 [2024-11-05 16:59:25.986146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:18.928 [2024-11-05 16:59:25.986370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:18.928 [2024-11-05 16:59:25.986380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:18.928 [2024-11-05 16:59:25.986388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:18.928 [2024-11-05 16:59:25.986396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.190 [2024-11-05 16:59:25.999166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.190 [2024-11-05 16:59:25.999589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.190 [2024-11-05 16:59:25.999610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.190 [2024-11-05 16:59:25.999623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.190 [2024-11-05 16:59:25.999849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.190 [2024-11-05 16:59:26.000070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.190 [2024-11-05 16:59:26.000079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.190 [2024-11-05 16:59:26.000087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.190 [2024-11-05 16:59:26.000094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.190 [2024-11-05 16:59:26.013072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.190 [2024-11-05 16:59:26.013704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.190 [2024-11-05 16:59:26.013743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.190 [2024-11-05 16:59:26.013763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.190 [2024-11-05 16:59:26.014003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.190 [2024-11-05 16:59:26.014226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.190 [2024-11-05 16:59:26.014236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.190 [2024-11-05 16:59:26.014244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.190 [2024-11-05 16:59:26.014252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.190 [2024-11-05 16:59:26.027016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.190 [2024-11-05 16:59:26.027585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.190 [2024-11-05 16:59:26.027606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.190 [2024-11-05 16:59:26.027614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.190 [2024-11-05 16:59:26.027838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.190 [2024-11-05 16:59:26.028059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.190 [2024-11-05 16:59:26.028071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.190 [2024-11-05 16:59:26.028078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.190 [2024-11-05 16:59:26.028085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.190 [2024-11-05 16:59:26.041033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.190 [2024-11-05 16:59:26.041580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.190 [2024-11-05 16:59:26.041598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.190 [2024-11-05 16:59:26.041606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.190 [2024-11-05 16:59:26.041831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.190 [2024-11-05 16:59:26.042057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.190 [2024-11-05 16:59:26.042069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.190 [2024-11-05 16:59:26.042076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.190 [2024-11-05 16:59:26.042083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.190 [2024-11-05 16:59:26.054839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.190 [2024-11-05 16:59:26.055398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.190 [2024-11-05 16:59:26.055415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.190 [2024-11-05 16:59:26.055423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.190 [2024-11-05 16:59:26.055641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.190 [2024-11-05 16:59:26.055867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.190 [2024-11-05 16:59:26.055877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.190 [2024-11-05 16:59:26.055885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.190 [2024-11-05 16:59:26.055892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.190 [2024-11-05 16:59:26.068688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.190 [2024-11-05 16:59:26.069221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.190 [2024-11-05 16:59:26.069238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.190 [2024-11-05 16:59:26.069246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.190 [2024-11-05 16:59:26.069465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.190 [2024-11-05 16:59:26.069685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.190 [2024-11-05 16:59:26.069694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.190 [2024-11-05 16:59:26.069701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.190 [2024-11-05 16:59:26.069708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.190 [2024-11-05 16:59:26.082663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.190 [2024-11-05 16:59:26.083209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.190 [2024-11-05 16:59:26.083226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.190 [2024-11-05 16:59:26.083234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.190 [2024-11-05 16:59:26.083453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.190 [2024-11-05 16:59:26.083672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.190 [2024-11-05 16:59:26.083682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.190 [2024-11-05 16:59:26.083693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.190 [2024-11-05 16:59:26.083701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.190 [2024-11-05 16:59:26.096454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.190 [2024-11-05 16:59:26.097070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.190 [2024-11-05 16:59:26.097109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.190 [2024-11-05 16:59:26.097121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.191 [2024-11-05 16:59:26.097359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.191 [2024-11-05 16:59:26.097584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.191 [2024-11-05 16:59:26.097594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.191 [2024-11-05 16:59:26.097602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.191 [2024-11-05 16:59:26.097610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.191 [2024-11-05 16:59:26.110396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.111052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.111091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.111102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.111340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.111564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.111575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.111583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.111591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 [2024-11-05 16:59:26.124357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.125087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.125126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.125137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.125375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.125598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.125608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.125617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.125625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 [2024-11-05 16:59:26.138211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.138872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.138912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.138925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.139167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.139391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.139402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.139410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.139418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 [2024-11-05 16:59:26.152189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.152840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.152879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.152891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.153134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.153358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.153368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.153376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.153384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 [2024-11-05 16:59:26.166162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.166740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.166765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.166774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.166994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.167214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.167223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.167231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.167237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 [2024-11-05 16:59:26.179990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.180560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.180578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.180591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.180816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.181036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.181046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.181053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.181060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 [2024-11-05 16:59:26.193807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.194218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.194235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.194243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.194462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.194681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.194691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.194698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.194705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 [2024-11-05 16:59:26.207730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.208355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.208394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.208405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.208643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.208882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.208893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.208901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.208909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 5358.40 IOPS, 20.93 MiB/s [2024-11-05T15:59:26.254Z] [2024-11-05 16:59:26.221646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.191 [2024-11-05 16:59:26.222304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.191 [2024-11-05 16:59:26.222343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.191 [2024-11-05 16:59:26.222354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.191 [2024-11-05 16:59:26.222592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.191 [2024-11-05 16:59:26.222828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.191 [2024-11-05 16:59:26.222839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.191 [2024-11-05 16:59:26.222846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.191 [2024-11-05 16:59:26.222854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.191 [2024-11-05 16:59:26.235628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.192 [2024-11-05 16:59:26.236254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.192 [2024-11-05 16:59:26.236293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.192 [2024-11-05 16:59:26.236303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.192 [2024-11-05 16:59:26.236542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.192 [2024-11-05 16:59:26.236776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.192 [2024-11-05 16:59:26.236787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.192 [2024-11-05 16:59:26.236795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.192 [2024-11-05 16:59:26.236803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.192 [2024-11-05 16:59:26.249575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.192 [2024-11-05 16:59:26.250135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.192 [2024-11-05 16:59:26.250156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.192 [2024-11-05 16:59:26.250164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.192 [2024-11-05 16:59:26.250384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.192 [2024-11-05 16:59:26.250604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.192 [2024-11-05 16:59:26.250613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.192 [2024-11-05 16:59:26.250620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.192 [2024-11-05 16:59:26.250627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.453 [2024-11-05 16:59:26.263397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.453 [2024-11-05 16:59:26.264052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.453 [2024-11-05 16:59:26.264091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.453 [2024-11-05 16:59:26.264102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.453 [2024-11-05 16:59:26.264341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.453 [2024-11-05 16:59:26.264565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.453 [2024-11-05 16:59:26.264575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.453 [2024-11-05 16:59:26.264591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.453 [2024-11-05 16:59:26.264600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.453 [2024-11-05 16:59:26.277384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.453 [2024-11-05 16:59:26.278038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.453 [2024-11-05 16:59:26.278077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.453 [2024-11-05 16:59:26.278088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.453 [2024-11-05 16:59:26.278327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.453 [2024-11-05 16:59:26.278550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.453 [2024-11-05 16:59:26.278560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.453 [2024-11-05 16:59:26.278568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.453 [2024-11-05 16:59:26.278576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.453 [2024-11-05 16:59:26.291346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.453 [2024-11-05 16:59:26.292031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.453 [2024-11-05 16:59:26.292070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.453 [2024-11-05 16:59:26.292081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.453 [2024-11-05 16:59:26.292319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.453 [2024-11-05 16:59:26.292543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.453 [2024-11-05 16:59:26.292553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.453 [2024-11-05 16:59:26.292561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.453 [2024-11-05 16:59:26.292569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.453 [2024-11-05 16:59:26.305339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.453 [2024-11-05 16:59:26.305881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.453 [2024-11-05 16:59:26.305901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.453 [2024-11-05 16:59:26.305909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.453 [2024-11-05 16:59:26.306130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.453 [2024-11-05 16:59:26.306350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.453 [2024-11-05 16:59:26.306359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.453 [2024-11-05 16:59:26.306366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.453 [2024-11-05 16:59:26.306373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.453 [2024-11-05 16:59:26.319147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.453 [2024-11-05 16:59:26.319664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.453 [2024-11-05 16:59:26.319681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.453 [2024-11-05 16:59:26.319689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.453 [2024-11-05 16:59:26.319913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.453 [2024-11-05 16:59:26.320133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.453 [2024-11-05 16:59:26.320142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.453 [2024-11-05 16:59:26.320149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.320156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.333117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.333681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.333698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.333706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.333930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.334150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.334159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.334167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.334173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.346931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.347494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.347510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.347518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.347737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.347962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.347972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.347979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.347986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.360731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.361386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.361425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.361440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.361678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.361911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.361921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.361929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.361937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.374711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.375300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.375338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.375349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.375587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.375819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.375831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.375839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.375847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.388606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.389350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.389388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.389399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.389638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.389870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.389880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.389889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.389897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.402444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.403075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.403114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.403125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.403363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.403591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.403601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.403609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.403617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.416395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.416877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.416916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.416928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.417168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.417392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.417402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.417410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.417418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.430393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.431045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.431084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.431095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.431333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.431557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.431566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.431574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.431582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.444346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.445047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.445085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.445096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.445334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.445558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.445568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.454 [2024-11-05 16:59:26.445582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.454 [2024-11-05 16:59:26.445590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.454 [2024-11-05 16:59:26.458187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.454 [2024-11-05 16:59:26.458863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.454 [2024-11-05 16:59:26.458901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.454 [2024-11-05 16:59:26.458913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.454 [2024-11-05 16:59:26.459153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.454 [2024-11-05 16:59:26.459377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.454 [2024-11-05 16:59:26.459387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.455 [2024-11-05 16:59:26.459395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.455 [2024-11-05 16:59:26.459403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.455 [2024-11-05 16:59:26.472184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.455 [2024-11-05 16:59:26.472849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.455 [2024-11-05 16:59:26.472888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.455 [2024-11-05 16:59:26.472901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.455 [2024-11-05 16:59:26.473140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.455 [2024-11-05 16:59:26.473364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.455 [2024-11-05 16:59:26.473374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.455 [2024-11-05 16:59:26.473382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.455 [2024-11-05 16:59:26.473390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.455 [2024-11-05 16:59:26.486156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.455 [2024-11-05 16:59:26.486837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.455 [2024-11-05 16:59:26.486877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.455 [2024-11-05 16:59:26.486889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.455 [2024-11-05 16:59:26.487128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.455 [2024-11-05 16:59:26.487352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.455 [2024-11-05 16:59:26.487362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.455 [2024-11-05 16:59:26.487370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.455 [2024-11-05 16:59:26.487378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.455 [2024-11-05 16:59:26.500150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.455 [2024-11-05 16:59:26.500834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.455 [2024-11-05 16:59:26.500873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.455 [2024-11-05 16:59:26.500886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.455 [2024-11-05 16:59:26.501127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.455 [2024-11-05 16:59:26.501352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.455 [2024-11-05 16:59:26.501361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.455 [2024-11-05 16:59:26.501369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.455 [2024-11-05 16:59:26.501377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.455 [2024-11-05 16:59:26.514153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.455 [2024-11-05 16:59:26.514831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.455 [2024-11-05 16:59:26.514869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.455 [2024-11-05 16:59:26.514880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.455 [2024-11-05 16:59:26.515119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.455 [2024-11-05 16:59:26.515342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.455 [2024-11-05 16:59:26.515352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.455 [2024-11-05 16:59:26.515360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.455 [2024-11-05 16:59:26.515368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.716 [2024-11-05 16:59:26.528137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.716 [2024-11-05 16:59:26.528822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.716 [2024-11-05 16:59:26.528861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.716 [2024-11-05 16:59:26.528872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.716 [2024-11-05 16:59:26.529110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.716 [2024-11-05 16:59:26.529334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.716 [2024-11-05 16:59:26.529344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.716 [2024-11-05 16:59:26.529352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.716 [2024-11-05 16:59:26.529360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.716 [2024-11-05 16:59:26.542127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.716 [2024-11-05 16:59:26.542801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.716 [2024-11-05 16:59:26.542840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.716 [2024-11-05 16:59:26.542856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.716 [2024-11-05 16:59:26.543096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.716 [2024-11-05 16:59:26.543320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.716 [2024-11-05 16:59:26.543329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.716 [2024-11-05 16:59:26.543337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.716 [2024-11-05 16:59:26.543345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.716 [2024-11-05 16:59:26.556111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.716 [2024-11-05 16:59:26.556777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.716 [2024-11-05 16:59:26.556816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.716 [2024-11-05 16:59:26.556829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.557071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.557295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.557304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.557312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.557320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.570098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.570635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.570655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.570663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.570890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.571111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.571120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.571128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.571135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.583896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.584452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.584470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.584477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.584696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.584926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.584936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.584943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.584950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.597702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.598277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.598294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.598302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.598521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.598743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.598758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.598765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.598772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.611524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.612094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.612111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.612118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.612337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.612558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.612566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.612574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.612580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.625327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.625982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.626021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.626032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.626270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.626494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.626504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.626516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.626524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.639299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.639861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.639900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.639911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.640149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.640373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.640383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.640391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.640399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.653161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.653798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.653837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.653848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.654089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.654313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.654323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.654331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.654339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.667110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.667781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.667820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.667831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.668080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.668305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.668315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.668323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.668331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.681097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.681770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.717 [2024-11-05 16:59:26.681808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.717 [2024-11-05 16:59:26.681821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.717 [2024-11-05 16:59:26.682061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.717 [2024-11-05 16:59:26.682284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.717 [2024-11-05 16:59:26.682294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.717 [2024-11-05 16:59:26.682301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.717 [2024-11-05 16:59:26.682309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.717 [2024-11-05 16:59:26.695086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.717 [2024-11-05 16:59:26.695743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.718 [2024-11-05 16:59:26.695788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.718 [2024-11-05 16:59:26.695798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.718 [2024-11-05 16:59:26.696037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.718 [2024-11-05 16:59:26.696261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.718 [2024-11-05 16:59:26.696271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.718 [2024-11-05 16:59:26.696278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.718 [2024-11-05 16:59:26.696286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.718 [2024-11-05 16:59:26.709054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.718 [2024-11-05 16:59:26.709725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.718 [2024-11-05 16:59:26.709770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.718 [2024-11-05 16:59:26.709782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.718 [2024-11-05 16:59:26.710020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.718 [2024-11-05 16:59:26.710244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.718 [2024-11-05 16:59:26.710253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.718 [2024-11-05 16:59:26.710261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.718 [2024-11-05 16:59:26.710269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.718 [2024-11-05 16:59:26.723031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.718 [2024-11-05 16:59:26.723701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.718 [2024-11-05 16:59:26.723739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.718 [2024-11-05 16:59:26.723765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.718 [2024-11-05 16:59:26.724003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.718 [2024-11-05 16:59:26.724227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.718 [2024-11-05 16:59:26.724237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.718 [2024-11-05 16:59:26.724245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.718 [2024-11-05 16:59:26.724253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.718 [2024-11-05 16:59:26.737012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.718 [2024-11-05 16:59:26.737693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.718 [2024-11-05 16:59:26.737731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.718 [2024-11-05 16:59:26.737744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.718 [2024-11-05 16:59:26.737993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.718 [2024-11-05 16:59:26.738217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.718 [2024-11-05 16:59:26.738227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.718 [2024-11-05 16:59:26.738234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.718 [2024-11-05 16:59:26.738243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.718 [2024-11-05 16:59:26.751005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.718 [2024-11-05 16:59:26.751542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.718 [2024-11-05 16:59:26.751563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.718 [2024-11-05 16:59:26.751571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.718 [2024-11-05 16:59:26.751798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.718 [2024-11-05 16:59:26.752019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.718 [2024-11-05 16:59:26.752028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.718 [2024-11-05 16:59:26.752036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.718 [2024-11-05 16:59:26.752043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.718 [2024-11-05 16:59:26.764797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:19.718 [2024-11-05 16:59:26.765363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:19.718 [2024-11-05 16:59:26.765381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:19.718 [2024-11-05 16:59:26.765389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:19.718 [2024-11-05 16:59:26.765608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:19.718 [2024-11-05 16:59:26.765838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:19.718 [2024-11-05 16:59:26.765848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:19.718 [2024-11-05 16:59:26.765856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:19.718 [2024-11-05 16:59:26.765863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:19.718 [2024-11-05 16:59:26.778625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.718 [2024-11-05 16:59:26.779164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.718 [2024-11-05 16:59:26.779182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.718 [2024-11-05 16:59:26.779190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.718 [2024-11-05 16:59:26.779409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.979 [2024-11-05 16:59:26.779629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.979 [2024-11-05 16:59:26.779640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.979 [2024-11-05 16:59:26.779648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.779655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 [2024-11-05 16:59:26.792415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 [2024-11-05 16:59:26.792958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.980 [2024-11-05 16:59:26.792976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.980 [2024-11-05 16:59:26.792983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.980 [2024-11-05 16:59:26.793202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.980 [2024-11-05 16:59:26.793422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.980 [2024-11-05 16:59:26.793431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.980 [2024-11-05 16:59:26.793438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.793445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 [2024-11-05 16:59:26.806406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 [2024-11-05 16:59:26.807121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.980 [2024-11-05 16:59:26.807160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.980 [2024-11-05 16:59:26.807171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.980 [2024-11-05 16:59:26.807410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.980 [2024-11-05 16:59:26.807634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.980 [2024-11-05 16:59:26.807643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.980 [2024-11-05 16:59:26.807656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.807664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 [2024-11-05 16:59:26.820229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 [2024-11-05 16:59:26.820766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.980 [2024-11-05 16:59:26.820787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.980 [2024-11-05 16:59:26.820795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.980 [2024-11-05 16:59:26.821015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.980 [2024-11-05 16:59:26.821235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.980 [2024-11-05 16:59:26.821245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.980 [2024-11-05 16:59:26.821253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.821260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 [2024-11-05 16:59:26.834019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 [2024-11-05 16:59:26.834546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.980 [2024-11-05 16:59:26.834563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.980 [2024-11-05 16:59:26.834571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.980 [2024-11-05 16:59:26.834797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.980 [2024-11-05 16:59:26.835018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.980 [2024-11-05 16:59:26.835027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.980 [2024-11-05 16:59:26.835035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.835041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 [2024-11-05 16:59:26.847998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 [2024-11-05 16:59:26.848697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.980 [2024-11-05 16:59:26.848735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.980 [2024-11-05 16:59:26.848756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.980 [2024-11-05 16:59:26.848995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.980 [2024-11-05 16:59:26.849219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.980 [2024-11-05 16:59:26.849229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.980 [2024-11-05 16:59:26.849237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.849245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3360366 Killed "${NVMF_APP[@]}" "$@"
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:19.980 [2024-11-05 16:59:26.861809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:19.980 [2024-11-05 16:59:26.862481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.980 [2024-11-05 16:59:26.862519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.980 [2024-11-05 16:59:26.862530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.980 [2024-11-05 16:59:26.862777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.980 [2024-11-05 16:59:26.863002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.980 [2024-11-05 16:59:26.863011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.980 [2024-11-05 16:59:26.863019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.863028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=3362063
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 3362063
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3362063 ']'
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:19.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:35:19.980 16:59:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:19.980 [2024-11-05 16:59:26.875806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 [2024-11-05 16:59:26.876467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.980 [2024-11-05 16:59:26.876506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.980 [2024-11-05 16:59:26.876519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.980 [2024-11-05 16:59:26.876766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.980 [2024-11-05 16:59:26.876991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.980 [2024-11-05 16:59:26.877002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.980 [2024-11-05 16:59:26.877010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.877024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 [2024-11-05 16:59:26.889797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 [2024-11-05 16:59:26.890438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.980 [2024-11-05 16:59:26.890477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.980 [2024-11-05 16:59:26.890488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.980 [2024-11-05 16:59:26.890727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.980 [2024-11-05 16:59:26.890959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.980 [2024-11-05 16:59:26.890970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.980 [2024-11-05 16:59:26.890978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.980 [2024-11-05 16:59:26.890986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.980 [2024-11-05 16:59:26.903740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.980 [2024-11-05 16:59:26.904458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:26.904497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:26.904508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:26.904752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:26.904976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:26.904985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:26.904993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:26.905001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:26.917565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:26.918241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:26.918280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:26.918291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:26.918529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:26.918760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:26.918771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:26.918779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:26.918787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:26.926707] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization...
00:35:19.981 [2024-11-05 16:59:26.926763] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:19.981 [2024-11-05 16:59:26.931549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:26.932237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:26.932276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:26.932287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:26.932526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:26.932759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:26.932770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:26.932778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:26.932787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:26.945553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:26.946059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:26.946079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:26.946088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:26.946308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:26.946528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:26.946537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:26.946544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:26.946551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:26.959610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:26.960154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:26.960172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:26.960180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:26.960400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:26.960621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:26.960630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:26.960637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:26.960644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:26.973421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:26.973947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:26.973965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:26.973973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:26.974193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:26.974413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:26.974423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:26.974430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:26.974437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:26.987398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:26.987952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:26.987990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:26.988002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:26.988243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:26.988466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:26.988475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:26.988484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:26.988492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:27.001261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:27.001863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:27.001902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:27.001914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:27.002156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:27.002380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:27.002389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:27.002397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:27.002405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:27.015177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:27.015851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:27.015890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:27.015907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:27.016148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:27.016287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:19.981 [2024-11-05 16:59:27.016372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.981 [2024-11-05 16:59:27.016381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.981 [2024-11-05 16:59:27.016389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.981 [2024-11-05 16:59:27.016397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:19.981 [2024-11-05 16:59:27.029178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:19.981 [2024-11-05 16:59:27.029864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:19.981 [2024-11-05 16:59:27.029904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:19.981 [2024-11-05 16:59:27.029917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:19.981 [2024-11-05 16:59:27.030157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:19.981 [2024-11-05 16:59:27.030381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:19.982 [2024-11-05 16:59:27.030391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:19.982 [2024-11-05 16:59:27.030399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:19.982 [2024-11-05 16:59:27.030408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.243 [2024-11-05 16:59:27.043191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.243 [2024-11-05 16:59:27.043961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.243 [2024-11-05 16:59:27.043999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.243 [2024-11-05 16:59:27.044011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.243 [2024-11-05 16:59:27.044250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.243 [2024-11-05 16:59:27.044475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.243 [2024-11-05 16:59:27.044486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.243 [2024-11-05 16:59:27.044494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.243 [2024-11-05 16:59:27.044503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.243 [2024-11-05 16:59:27.045855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:20.243 [2024-11-05 16:59:27.045876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:20.243 [2024-11-05 16:59:27.045883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:20.243 [2024-11-05 16:59:27.045888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:20.243 [2024-11-05 16:59:27.045893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:20.243 [2024-11-05 16:59:27.046992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:35:20.243 [2024-11-05 16:59:27.047145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:20.243 [2024-11-05 16:59:27.047148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:35:20.243 [2024-11-05 16:59:27.057068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.243 [2024-11-05 16:59:27.057801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.243 [2024-11-05 16:59:27.057841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.243 [2024-11-05 16:59:27.057854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.243 [2024-11-05 16:59:27.058097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.243 [2024-11-05 16:59:27.058321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.243 [2024-11-05 16:59:27.058331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.243 [2024-11-05 16:59:27.058340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.243 [2024-11-05 16:59:27.058348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.243 [2024-11-05 16:59:27.070925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.243 [2024-11-05 16:59:27.071390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.243 [2024-11-05 16:59:27.071411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.243 [2024-11-05 16:59:27.071420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.243 [2024-11-05 16:59:27.071640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.244 [2024-11-05 16:59:27.071866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.244 [2024-11-05 16:59:27.071877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.244 [2024-11-05 16:59:27.071884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.244 [2024-11-05 16:59:27.071892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.244 [2024-11-05 16:59:27.084862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.244 [2024-11-05 16:59:27.085450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.244 [2024-11-05 16:59:27.085467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.244 [2024-11-05 16:59:27.085476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.244 [2024-11-05 16:59:27.085696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.244 [2024-11-05 16:59:27.085922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.244 [2024-11-05 16:59:27.085932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.244 [2024-11-05 16:59:27.085940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.244 [2024-11-05 16:59:27.085947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.244 [2024-11-05 16:59:27.098799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.244 [2024-11-05 16:59:27.099491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.244 [2024-11-05 16:59:27.099532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.244 [2024-11-05 16:59:27.099543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.244 [2024-11-05 16:59:27.099791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.244 [2024-11-05 16:59:27.100016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.244 [2024-11-05 16:59:27.100026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.244 [2024-11-05 16:59:27.100034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.244 [2024-11-05 16:59:27.100042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.244 [2024-11-05 16:59:27.112606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.244 [2024-11-05 16:59:27.113275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.244 [2024-11-05 16:59:27.113314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.244 [2024-11-05 16:59:27.113325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.244 [2024-11-05 16:59:27.113564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.244 [2024-11-05 16:59:27.113796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.244 [2024-11-05 16:59:27.113807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.244 [2024-11-05 16:59:27.113815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.244 [2024-11-05 16:59:27.113823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.244 [2024-11-05 16:59:27.126581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.244 [2024-11-05 16:59:27.127265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.244 [2024-11-05 16:59:27.127304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.244 [2024-11-05 16:59:27.127315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.244 [2024-11-05 16:59:27.127553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.244 [2024-11-05 16:59:27.127785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.244 [2024-11-05 16:59:27.127795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.244 [2024-11-05 16:59:27.127803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.244 [2024-11-05 16:59:27.127811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.244 [2024-11-05 16:59:27.140572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.244 [2024-11-05 16:59:27.141176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.244 [2024-11-05 16:59:27.141196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.244 [2024-11-05 16:59:27.141209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.244 [2024-11-05 16:59:27.141429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.244 [2024-11-05 16:59:27.141650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.244 [2024-11-05 16:59:27.141659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.244 [2024-11-05 16:59:27.141666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.244 [2024-11-05 16:59:27.141673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.244 [2024-11-05 16:59:27.154432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.244 [2024-11-05 16:59:27.155081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.244 [2024-11-05 16:59:27.155120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.244 [2024-11-05 16:59:27.155131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.244 [2024-11-05 16:59:27.155370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.244 [2024-11-05 16:59:27.155594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.244 [2024-11-05 16:59:27.155604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.244 [2024-11-05 16:59:27.155612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.244 [2024-11-05 16:59:27.155620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.244 [2024-11-05 16:59:27.168393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.244 [2024-11-05 16:59:27.169064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.244 [2024-11-05 16:59:27.169104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.244 [2024-11-05 16:59:27.169116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.244 [2024-11-05 16:59:27.169356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.244 [2024-11-05 16:59:27.169595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.244 [2024-11-05 16:59:27.169606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.244 [2024-11-05 16:59:27.169614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.244 [2024-11-05 16:59:27.169622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.244 [2024-11-05 16:59:27.182387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.244 [2024-11-05 16:59:27.183039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.244 [2024-11-05 16:59:27.183078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.244 [2024-11-05 16:59:27.183089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.244 [2024-11-05 16:59:27.183327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.244 [2024-11-05 16:59:27.183555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.244 [2024-11-05 16:59:27.183565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.244 [2024-11-05 16:59:27.183574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.244 [2024-11-05 16:59:27.183582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.244 [2024-11-05 16:59:27.196352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.244 [2024-11-05 16:59:27.196860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.244 [2024-11-05 16:59:27.196898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.244 [2024-11-05 16:59:27.196911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.244 [2024-11-05 16:59:27.197153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.244 [2024-11-05 16:59:27.197376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.244 [2024-11-05 16:59:27.197385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.244 [2024-11-05 16:59:27.197393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.244 [2024-11-05 16:59:27.197401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.244 [2024-11-05 16:59:27.210183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.244 [2024-11-05 16:59:27.210794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.245 [2024-11-05 16:59:27.210829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.245 [2024-11-05 16:59:27.210838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.245 [2024-11-05 16:59:27.211063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.245 [2024-11-05 16:59:27.211290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.245 [2024-11-05 16:59:27.211299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.245 [2024-11-05 16:59:27.211306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.245 [2024-11-05 16:59:27.211313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.245 4465.33 IOPS, 17.44 MiB/s [2024-11-05T15:59:27.308Z] [2024-11-05 16:59:27.224106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.245 [2024-11-05 16:59:27.224663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.245 [2024-11-05 16:59:27.224702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.245 [2024-11-05 16:59:27.224714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.245 [2024-11-05 16:59:27.224966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.245 [2024-11-05 16:59:27.225190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.245 [2024-11-05 16:59:27.225199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.245 [2024-11-05 16:59:27.225211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.245 [2024-11-05 16:59:27.225219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.245 [2024-11-05 16:59:27.237986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.245 [2024-11-05 16:59:27.238673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.245 [2024-11-05 16:59:27.238711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.245 [2024-11-05 16:59:27.238723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.245 [2024-11-05 16:59:27.238973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.245 [2024-11-05 16:59:27.239197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.245 [2024-11-05 16:59:27.239206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.245 [2024-11-05 16:59:27.239214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.245 [2024-11-05 16:59:27.239222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.245 [2024-11-05 16:59:27.251989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.245 [2024-11-05 16:59:27.252678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.245 [2024-11-05 16:59:27.252715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.245 [2024-11-05 16:59:27.252726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.245 [2024-11-05 16:59:27.252972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.245 [2024-11-05 16:59:27.253197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.245 [2024-11-05 16:59:27.253206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.245 [2024-11-05 16:59:27.253214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.245 [2024-11-05 16:59:27.253222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.245 [2024-11-05 16:59:27.265985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.245 [2024-11-05 16:59:27.266694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.245 [2024-11-05 16:59:27.266732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.245 [2024-11-05 16:59:27.266743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.245 [2024-11-05 16:59:27.266991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.245 [2024-11-05 16:59:27.267214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.245 [2024-11-05 16:59:27.267223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.245 [2024-11-05 16:59:27.267231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.245 [2024-11-05 16:59:27.267239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.245 [2024-11-05 16:59:27.279813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.245 [2024-11-05 16:59:27.280485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.245 [2024-11-05 16:59:27.280523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.245 [2024-11-05 16:59:27.280534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.245 [2024-11-05 16:59:27.280782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.245 [2024-11-05 16:59:27.281007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.245 [2024-11-05 16:59:27.281015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.245 [2024-11-05 16:59:27.281022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.245 [2024-11-05 16:59:27.281030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.245 [2024-11-05 16:59:27.293800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.245 [2024-11-05 16:59:27.294465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.245 [2024-11-05 16:59:27.294503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.245 [2024-11-05 16:59:27.294513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.245 [2024-11-05 16:59:27.294759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.245 [2024-11-05 16:59:27.294983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.245 [2024-11-05 16:59:27.294992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.245 [2024-11-05 16:59:27.294999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.245 [2024-11-05 16:59:27.295007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.507 [2024-11-05 16:59:27.307798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.507 [2024-11-05 16:59:27.308245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.507 [2024-11-05 16:59:27.308265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.507 [2024-11-05 16:59:27.308273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.507 [2024-11-05 16:59:27.308493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.507 [2024-11-05 16:59:27.308712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.507 [2024-11-05 16:59:27.308721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.507 [2024-11-05 16:59:27.308728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.507 [2024-11-05 16:59:27.308735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.507 [2024-11-05 16:59:27.321714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.507 [2024-11-05 16:59:27.322140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.507 [2024-11-05 16:59:27.322157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.507 [2024-11-05 16:59:27.322169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.507 [2024-11-05 16:59:27.322388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.508 [2024-11-05 16:59:27.322608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.508 [2024-11-05 16:59:27.322617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.508 [2024-11-05 16:59:27.322624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.508 [2024-11-05 16:59:27.322631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.508 [2024-11-05 16:59:27.335598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.508 [2024-11-05 16:59:27.336143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.508 [2024-11-05 16:59:27.336160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.508 [2024-11-05 16:59:27.336168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.508 [2024-11-05 16:59:27.336387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.508 [2024-11-05 16:59:27.336605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.508 [2024-11-05 16:59:27.336614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.508 [2024-11-05 16:59:27.336621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.508 [2024-11-05 16:59:27.336629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.508 [2024-11-05 16:59:27.349388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.508 [2024-11-05 16:59:27.349816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.508 [2024-11-05 16:59:27.349833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.508 [2024-11-05 16:59:27.349840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.508 [2024-11-05 16:59:27.350060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.508 [2024-11-05 16:59:27.350279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.508 [2024-11-05 16:59:27.350287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.508 [2024-11-05 16:59:27.350294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.508 [2024-11-05 16:59:27.350300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.508 [2024-11-05 16:59:27.363267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.508 [2024-11-05 16:59:27.363835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.508 [2024-11-05 16:59:27.363852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.508 [2024-11-05 16:59:27.363860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.508 [2024-11-05 16:59:27.364079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.508 [2024-11-05 16:59:27.364302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.508 [2024-11-05 16:59:27.364311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.508 [2024-11-05 16:59:27.364318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.508 [2024-11-05 16:59:27.364325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.508 [2024-11-05 16:59:27.377096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.508 [2024-11-05 16:59:27.377757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.508 [2024-11-05 16:59:27.377795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.508 [2024-11-05 16:59:27.377806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.508 [2024-11-05 16:59:27.378045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.508 [2024-11-05 16:59:27.378268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.508 [2024-11-05 16:59:27.378277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.508 [2024-11-05 16:59:27.378285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.508 [2024-11-05 16:59:27.378293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.508 [2024-11-05 16:59:27.391066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.508 [2024-11-05 16:59:27.391714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.508 [2024-11-05 16:59:27.391762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.508 [2024-11-05 16:59:27.391773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.508 [2024-11-05 16:59:27.392011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.508 [2024-11-05 16:59:27.392234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.508 [2024-11-05 16:59:27.392242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.508 [2024-11-05 16:59:27.392250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.508 [2024-11-05 16:59:27.392258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.508 [2024-11-05 16:59:27.405026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.508 [2024-11-05 16:59:27.405719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.508 [2024-11-05 16:59:27.405764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.508 [2024-11-05 16:59:27.405775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.508 [2024-11-05 16:59:27.406014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.508 [2024-11-05 16:59:27.406237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.508 [2024-11-05 16:59:27.406246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.508 [2024-11-05 16:59:27.406258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.508 [2024-11-05 16:59:27.406266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.508 [2024-11-05 16:59:27.418842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.508 [2024-11-05 16:59:27.419279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.508 [2024-11-05 16:59:27.419299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.508 [2024-11-05 16:59:27.419307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.508 [2024-11-05 16:59:27.419527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.508 [2024-11-05 16:59:27.419753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.508 [2024-11-05 16:59:27.419763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.508 [2024-11-05 16:59:27.419770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.508 [2024-11-05 16:59:27.419777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.508 [2024-11-05 16:59:27.432748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.508 [2024-11-05 16:59:27.433275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.508 [2024-11-05 16:59:27.433313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.508 [2024-11-05 16:59:27.433324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.508 [2024-11-05 16:59:27.433563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.508 [2024-11-05 16:59:27.433793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.508 [2024-11-05 16:59:27.433803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.508 [2024-11-05 16:59:27.433810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.508 [2024-11-05 16:59:27.433818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.508 [2024-11-05 16:59:27.446588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.508 [2024-11-05 16:59:27.447243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.508 [2024-11-05 16:59:27.447281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.508 [2024-11-05 16:59:27.447293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.508 [2024-11-05 16:59:27.447533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.508 [2024-11-05 16:59:27.447763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.508 [2024-11-05 16:59:27.447773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.508 [2024-11-05 16:59:27.447781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.508 [2024-11-05 16:59:27.447789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.508 [2024-11-05 16:59:27.460568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.508 [2024-11-05 16:59:27.461168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.508 [2024-11-05 16:59:27.461188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.508 [2024-11-05 16:59:27.461196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.508 [2024-11-05 16:59:27.461415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.508 [2024-11-05 16:59:27.461634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.509 [2024-11-05 16:59:27.461642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.509 [2024-11-05 16:59:27.461649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.509 [2024-11-05 16:59:27.461656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.509 [2024-11-05 16:59:27.474435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.509 [2024-11-05 16:59:27.475136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.509 [2024-11-05 16:59:27.475174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.509 [2024-11-05 16:59:27.475185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.509 [2024-11-05 16:59:27.475424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.509 [2024-11-05 16:59:27.475647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.509 [2024-11-05 16:59:27.475655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.509 [2024-11-05 16:59:27.475663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.509 [2024-11-05 16:59:27.475671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.509 [2024-11-05 16:59:27.488235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.509 [2024-11-05 16:59:27.488865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.509 [2024-11-05 16:59:27.488903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.509 [2024-11-05 16:59:27.488915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.509 [2024-11-05 16:59:27.489155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.509 [2024-11-05 16:59:27.489378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.509 [2024-11-05 16:59:27.489387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.509 [2024-11-05 16:59:27.489394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.509 [2024-11-05 16:59:27.489402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.509 [2024-11-05 16:59:27.502173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.509 [2024-11-05 16:59:27.502848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.509 [2024-11-05 16:59:27.502887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.509 [2024-11-05 16:59:27.502903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.509 [2024-11-05 16:59:27.503145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.509 [2024-11-05 16:59:27.503368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.509 [2024-11-05 16:59:27.503377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.509 [2024-11-05 16:59:27.503385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.509 [2024-11-05 16:59:27.503393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.509 [2024-11-05 16:59:27.515977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.509 [2024-11-05 16:59:27.516675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.509 [2024-11-05 16:59:27.516713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.509 [2024-11-05 16:59:27.516724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.509 [2024-11-05 16:59:27.516970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.509 [2024-11-05 16:59:27.517194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.509 [2024-11-05 16:59:27.517202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.509 [2024-11-05 16:59:27.517210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.509 [2024-11-05 16:59:27.517218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.509 [2024-11-05 16:59:27.529982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.509 [2024-11-05 16:59:27.530693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.509 [2024-11-05 16:59:27.530730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.509 [2024-11-05 16:59:27.530741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.509 [2024-11-05 16:59:27.530988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.509 [2024-11-05 16:59:27.531211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.509 [2024-11-05 16:59:27.531220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.509 [2024-11-05 16:59:27.531228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.509 [2024-11-05 16:59:27.531235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.509 [2024-11-05 16:59:27.543789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.509 [2024-11-05 16:59:27.544230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.509 [2024-11-05 16:59:27.544249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.509 [2024-11-05 16:59:27.544257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.509 [2024-11-05 16:59:27.544476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.509 [2024-11-05 16:59:27.544700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.509 [2024-11-05 16:59:27.544709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.509 [2024-11-05 16:59:27.544717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.509 [2024-11-05 16:59:27.544723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.509 [2024-11-05 16:59:27.557698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.509 [2024-11-05 16:59:27.558331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.509 [2024-11-05 16:59:27.558370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.509 [2024-11-05 16:59:27.558381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.509 [2024-11-05 16:59:27.558619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.509 [2024-11-05 16:59:27.558850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.509 [2024-11-05 16:59:27.558860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.509 [2024-11-05 16:59:27.558867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.509 [2024-11-05 16:59:27.558875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.771 [2024-11-05 16:59:27.571650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.771 [2024-11-05 16:59:27.572306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.771 [2024-11-05 16:59:27.572343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.771 [2024-11-05 16:59:27.572354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.771 [2024-11-05 16:59:27.572592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.771 [2024-11-05 16:59:27.572823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.771 [2024-11-05 16:59:27.572833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.771 [2024-11-05 16:59:27.572841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.771 [2024-11-05 16:59:27.572849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.771 [2024-11-05 16:59:27.585614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.771 [2024-11-05 16:59:27.586049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.771 [2024-11-05 16:59:27.586069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.771 [2024-11-05 16:59:27.586077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.771 [2024-11-05 16:59:27.586297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.771 [2024-11-05 16:59:27.586516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.771 [2024-11-05 16:59:27.586525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.771 [2024-11-05 16:59:27.586537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.771 [2024-11-05 16:59:27.586544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.771 [2024-11-05 16:59:27.599514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.771 [2024-11-05 16:59:27.600098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.771 [2024-11-05 16:59:27.600115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.771 [2024-11-05 16:59:27.600123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.771 [2024-11-05 16:59:27.600342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.771 [2024-11-05 16:59:27.600562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.771 [2024-11-05 16:59:27.600571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.771 [2024-11-05 16:59:27.600578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.771 [2024-11-05 16:59:27.600585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.771 [2024-11-05 16:59:27.613363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.771 [2024-11-05 16:59:27.614061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.771 [2024-11-05 16:59:27.614100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.771 [2024-11-05 16:59:27.614110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.771 [2024-11-05 16:59:27.614349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.771 [2024-11-05 16:59:27.614572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.771 [2024-11-05 16:59:27.614581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.771 [2024-11-05 16:59:27.614588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.771 [2024-11-05 16:59:27.614596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.771 [2024-11-05 16:59:27.627168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.771 [2024-11-05 16:59:27.627818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.771 [2024-11-05 16:59:27.627856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.771 [2024-11-05 16:59:27.627868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.771 [2024-11-05 16:59:27.628111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.771 [2024-11-05 16:59:27.628334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.771 [2024-11-05 16:59:27.628343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.771 [2024-11-05 16:59:27.628350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.771 [2024-11-05 16:59:27.628359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.771 [2024-11-05 16:59:27.641139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.771 [2024-11-05 16:59:27.641737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.771 [2024-11-05 16:59:27.641762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.771 [2024-11-05 16:59:27.641770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.641990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.642210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.642219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.642227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.642234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 [2024-11-05 16:59:27.654989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.772 [2024-11-05 16:59:27.655524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.772 [2024-11-05 16:59:27.655562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.772 [2024-11-05 16:59:27.655575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.655824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.656049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.656058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.656066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.656074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 [2024-11-05 16:59:27.668845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.772 [2024-11-05 16:59:27.669531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.772 [2024-11-05 16:59:27.669569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.772 [2024-11-05 16:59:27.669580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.669827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.670052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.670061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.670069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.670077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 [2024-11-05 16:59:27.682656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.772 [2024-11-05 16:59:27.683347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.772 [2024-11-05 16:59:27.683385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.772 [2024-11-05 16:59:27.683405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.683643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.683875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.683885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.683892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.683900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 [2024-11-05 16:59:27.696454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.772 [2024-11-05 16:59:27.697152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.772 [2024-11-05 16:59:27.697190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.772 [2024-11-05 16:59:27.697201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.697439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.697662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.697671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.697679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.697686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 [2024-11-05 16:59:27.710465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.772 [2024-11-05 16:59:27.711118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.772 [2024-11-05 16:59:27.711156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.772 [2024-11-05 16:59:27.711166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.711405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.711627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.711636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.711644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.711652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:20.772 [2024-11-05 16:59:27.724420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.772 [2024-11-05 16:59:27.725125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.772 [2024-11-05 16:59:27.725168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.772 [2024-11-05 16:59:27.725179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.725418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.725642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.725651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.725659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.725667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 [2024-11-05 16:59:27.738225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.772 [2024-11-05 16:59:27.738668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.772 [2024-11-05 16:59:27.738689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.772 [2024-11-05 16:59:27.738697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.738922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.739142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.739150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.739158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.739165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 [2024-11-05 16:59:27.752130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:20.772 [2024-11-05 16:59:27.752821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:20.772 [2024-11-05 16:59:27.752860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420
00:35:20.772 [2024-11-05 16:59:27.752871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set
00:35:20.772 [2024-11-05 16:59:27.753109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor
00:35:20.772 [2024-11-05 16:59:27.753332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:20.772 [2024-11-05 16:59:27.753342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:20.772 [2024-11-05 16:59:27.753349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:20.772 [2024-11-05 16:59:27.753357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.772 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 [2024-11-05 16:59:27.765929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.772 [2024-11-05 16:59:27.766533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.772 [2024-11-05 16:59:27.766552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.772 [2024-11-05 16:59:27.766560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.772 [2024-11-05 16:59:27.766786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.773 [2024-11-05 16:59:27.766843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.773 [2024-11-05 16:59:27.767006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.773 [2024-11-05 16:59:27.767015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.773 [2024-11-05 16:59:27.767022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.773 [2024-11-05 16:59:27.767028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.773 [2024-11-05 16:59:27.779794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.773 [2024-11-05 16:59:27.780233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.773 [2024-11-05 16:59:27.780250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.773 [2024-11-05 16:59:27.780258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.773 [2024-11-05 16:59:27.780478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.773 [2024-11-05 16:59:27.780697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.773 [2024-11-05 16:59:27.780705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.773 [2024-11-05 16:59:27.780713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.773 [2024-11-05 16:59:27.780720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.773 [2024-11-05 16:59:27.793681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.773 [2024-11-05 16:59:27.794237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.773 [2024-11-05 16:59:27.794253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.773 [2024-11-05 16:59:27.794261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.773 [2024-11-05 16:59:27.794480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.773 [2024-11-05 16:59:27.794698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.773 [2024-11-05 16:59:27.794707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.773 [2024-11-05 16:59:27.794715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.773 [2024-11-05 16:59:27.794722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.773 Malloc0 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.773 [2024-11-05 16:59:27.807535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.773 [2024-11-05 16:59:27.808101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.773 [2024-11-05 16:59:27.808118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.773 [2024-11-05 16:59:27.808126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.773 [2024-11-05 16:59:27.808345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.773 [2024-11-05 16:59:27.808563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.773 [2024-11-05 16:59:27.808572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.773 [2024-11-05 16:59:27.808579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.773 [2024-11-05 16:59:27.808586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.773 [2024-11-05 16:59:27.821350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:20.773 [2024-11-05 16:59:27.821792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:20.773 [2024-11-05 16:59:27.821813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1193000 with addr=10.0.0.2, port=4420 00:35:20.773 [2024-11-05 16:59:27.821820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193000 is same with the state(6) to be set 00:35:20.773 [2024-11-05 16:59:27.822042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1193000 (9): Bad file descriptor 00:35:20.773 [2024-11-05 16:59:27.822262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:20.773 [2024-11-05 16:59:27.822270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:20.773 [2024-11-05 16:59:27.822277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:20.773 [2024-11-05 16:59:27.822284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.773 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:20.773 [2024-11-05 16:59:27.830327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.034 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.034 [2024-11-05 16:59:27.835262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:21.034 16:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3361002 00:35:21.034 [2024-11-05 16:59:27.859350] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:35:22.235 4451.86 IOPS, 17.39 MiB/s [2024-11-05T15:59:30.238Z] 5412.00 IOPS, 21.14 MiB/s [2024-11-05T15:59:31.621Z] 6048.44 IOPS, 23.63 MiB/s [2024-11-05T15:59:32.562Z] 6572.30 IOPS, 25.67 MiB/s [2024-11-05T15:59:33.503Z] 6998.91 IOPS, 27.34 MiB/s [2024-11-05T15:59:34.443Z] 7367.67 IOPS, 28.78 MiB/s [2024-11-05T15:59:35.382Z] 7675.69 IOPS, 29.98 MiB/s [2024-11-05T15:59:36.322Z] 7951.00 IOPS, 31.06 MiB/s 00:35:29.259 Latency(us) 00:35:29.259 [2024-11-05T15:59:36.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.259 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:29.259 Verification LBA range: start 0x0 length 0x4000 00:35:29.259 Nvme1n1 : 15.00 8180.32 31.95 9756.69 0.00 7110.43 525.65 14417.92 00:35:29.259 [2024-11-05T15:59:36.322Z] =================================================================================================================== 00:35:29.259 [2024-11-05T15:59:36.322Z] Total : 8180.32 31.95 9756.69 0.00 7110.43 525.65 14417.92 00:35:29.519 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:29.519 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:29.519 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.519 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.519 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@99 -- # sync 00:35:29.520 16:59:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:29.520 rmmod nvme_tcp 00:35:29.520 rmmod nvme_fabrics 00:35:29.520 rmmod nvme_keyring 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 3362063 ']' 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 3362063 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3362063 ']' 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3362063 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3362063 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3362063' 00:35:29.520 killing process with pid 3362063 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@971 -- # kill 3362063 00:35:29.520 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3362063 00:35:29.780 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:29.780 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:35:29.780 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@254 -- # local dev 00:35:29.780 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:29.780 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:29.780 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:29.780 16:59:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@121 -- # return 0 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # dev_map=() 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@274 -- # iptr 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # iptables-save 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # iptables-restore 00:35:31.691 00:35:31.691 real 0m27.713s 00:35:31.691 user 1m2.591s 00:35:31.691 sys 0m7.221s 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:31.691 16:59:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.691 ************************************ 00:35:31.691 END TEST nvmf_bdevperf 00:35:31.691 ************************************ 
00:35:31.951 16:59:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.952 ************************************ 00:35:31.952 START TEST nvmf_target_disconnect 00:35:31.952 ************************************ 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:31.952 * Looking for test storage... 00:35:31.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 
00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:31.952 16:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.952 --rc genhtml_branch_coverage=1 00:35:31.952 --rc genhtml_function_coverage=1 00:35:31.952 --rc genhtml_legend=1 00:35:31.952 --rc geninfo_all_blocks=1 00:35:31.952 --rc geninfo_unexecuted_blocks=1 
00:35:31.952 00:35:31.952 ' 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.952 --rc genhtml_branch_coverage=1 00:35:31.952 --rc genhtml_function_coverage=1 00:35:31.952 --rc genhtml_legend=1 00:35:31.952 --rc geninfo_all_blocks=1 00:35:31.952 --rc geninfo_unexecuted_blocks=1 00:35:31.952 00:35:31.952 ' 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.952 --rc genhtml_branch_coverage=1 00:35:31.952 --rc genhtml_function_coverage=1 00:35:31.952 --rc genhtml_legend=1 00:35:31.952 --rc geninfo_all_blocks=1 00:35:31.952 --rc geninfo_unexecuted_blocks=1 00:35:31.952 00:35:31.952 ' 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.952 --rc genhtml_branch_coverage=1 00:35:31.952 --rc genhtml_function_coverage=1 00:35:31.952 --rc genhtml_legend=1 00:35:31.952 --rc geninfo_all_blocks=1 00:35:31.952 --rc geninfo_unexecuted_blocks=1 00:35:31.952 00:35:31.952 ' 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.952 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.213 16:59:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 
-- # : 0 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:32.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:35:32.213 16:59:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:35:38.802 16:59:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.802 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:38.803 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:38.803 16:59:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:38.803 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:4b:00.0: cvl_0_0' 00:35:38.803 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:38.803 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@247 -- # create_target_ns 00:35:38.803 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local 
-g _dev 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:39.064 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:39.065 16:59:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:39.065 16:59:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:39.065 10.0.0.1 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:39.065 16:59:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:39.065 10.0.0.2 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:39.065 
16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:39.065 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:39.326 16:59:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:39.326 16:59:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:39.326 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:39.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:39.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.696 ms 00:35:39.327 00:35:39.327 --- 10.0.0.1 ping statistics --- 00:35:39.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.327 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:39.327 16:59:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:39.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:39.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:35:39.327 00:35:39.327 --- 10.0.0.2 ping statistics --- 00:35:39.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.327 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.327 16:59:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # return 1 00:35:39.327 
16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev= 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@160 -- # return 0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # return 1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev= 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@160 -- # return 0 00:35:39.327 16:59:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:35:39.327 ' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:39.327 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:39.589 ************************************ 00:35:39.589 START TEST nvmf_target_disconnect_tc1 00:35:39.589 ************************************ 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:39.589 16:59:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:39.589 [2024-11-05 16:59:46.517655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.589 [2024-11-05 16:59:46.517735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df4ad0 with addr=10.0.0.2, port=4420 00:35:39.589 [2024-11-05 16:59:46.517773] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:39.589 [2024-11-05 16:59:46.517785] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:39.589 [2024-11-05 16:59:46.517793] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:39.589 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:39.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:39.589 Initializing NVMe Controllers 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:39.589 00:35:39.589 real 0m0.133s 00:35:39.589 user 0m0.062s 00:35:39.589 sys 0m0.072s 
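The tc1 run above drives `build/examples/reconnect` at a target described by `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'`. That argument is a space-separated list of `key:value` pairs; the sketch below parses it into a dict. This is a simplified illustration, not SPDK's actual parser (`spdk_nvme_transport_id_parse()` in the C library additionally handles fields such as `subnqn` and is stricter about input).

```python
def parse_trid(trid: str) -> dict:
    """Parse a transport ID string of the form used by the reconnect
    example's -r flag, e.g.
    'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.
    Simplified sketch: splits on whitespace, then on the first ':'
    of each token (so an IPv6 traddr containing ':' stays intact)."""
    fields = {}
    for token in trid.split():
        key, _, value = token.partition(":")
        fields[key] = value
    return fields

trid = parse_trid("trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420")
print(trid["traddr"], trid["trsvcid"])  # 10.0.0.2 4420
```

In the failing run above, `connect()` to that `traddr`/`trsvcid` pair returns errno 111 (connection refused) because no target is listening yet, which is exactly what tc1 expects the example to report.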
00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:39.589 ************************************ 00:35:39.589 END TEST nvmf_target_disconnect_tc1 00:35:39.589 ************************************ 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:39.589 ************************************ 00:35:39.589 START TEST nvmf_target_disconnect_tc2 00:35:39.589 ************************************ 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.589 16:59:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=3368133 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 3368133 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3368133 ']' 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:39.589 16:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.850 [2024-11-05 16:59:46.677640] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:35:39.850 [2024-11-05 16:59:46.677701] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.850 [2024-11-05 16:59:46.776189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:39.850 [2024-11-05 16:59:46.828582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.850 [2024-11-05 16:59:46.828631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.850 [2024-11-05 16:59:46.828640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.850 [2024-11-05 16:59:46.828647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.850 [2024-11-05 16:59:46.828654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
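The target here is launched with `-m 0xF0`, and DPDK accordingly reports 4 cores available; the reactor lines that follow start on cores 4 through 7. The mask-to-core mapping is just the set bits of the hex mask, as this small sketch shows:

```python
def coremask_to_cores(mask: str) -> list:
    """Expand a hex CPU core mask (the value passed to nvmf_tgt -m)
    into the list of core indices whose bits are set."""
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value >> bit & 1]

# 0xF0 = binary 1111 0000 -> bits 4..7 are set
print(coremask_to_cores("0xF0"))  # [4, 5, 6, 7]
```

Running the app under `ip netns exec nvmf_ns_spdk` with a mask disjoint from the initiator's cores keeps target and initiator from contending for the same CPUs during the disconnect tests.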
00:35:39.850 [2024-11-05 16:59:46.830705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:39.850 [2024-11-05 16:59:46.830867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:39.850 [2024-11-05 16:59:46.831267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:39.850 [2024-11-05 16:59:46.831270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:40.792 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:40.792 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:35:40.792 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:40.792 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:40.792 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:40.792 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.792 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:40.792 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:40.793 Malloc0 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.793 16:59:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:40.793 [2024-11-05 16:59:47.597758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.793 16:59:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:40.793 [2024-11-05 16:59:47.626093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3368482 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:40.793 16:59:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.708 16:59:49 
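The `rpc_cmd` calls above (create the TCP transport, create subsystem `nqn.2016-06.io.spdk:cnode1`, attach `Malloc0` as a namespace, add a listener on 10.0.0.2:4420) correspond to SPDK JSON-RPC methods of the same names, sent over `/var/tmp/spdk.sock`. The sketch below builds the raw request objects; the parameter names follow SPDK's JSON-RPC documentation but are illustrative here, not extracted from this log.

```python
import json
from itertools import count

_ids = count(1)

def rpc_request(method: str, **params) -> str:
    """Build one JSON-RPC 2.0 request of the kind scripts/rpc.py
    sends to the nvmf_tgt application's UNIX socket."""
    return json.dumps({"jsonrpc": "2.0", "id": next(_ids),
                       "method": method, "params": params})

# The setup sequence visible in the log, as raw requests
# (parameter names assumed from SPDK's JSON-RPC docs):
NQN = "nqn.2016-06.io.spdk:cnode1"
reqs = [
    rpc_request("nvmf_create_transport", trtype="tcp"),
    rpc_request("nvmf_create_subsystem", nqn=NQN,
                allow_any_host=True,
                serial_number="SPDK00000000000001"),
    rpc_request("nvmf_subsystem_add_ns", nqn=NQN,
                namespace={"bdev_name": "Malloc0"}),
    rpc_request("nvmf_subsystem_add_listener", nqn=NQN,
                listen_address={"trtype": "tcp",
                                "traddr": "10.0.0.2",
                                "trsvcid": "4420"}),
]
print(json.loads(reqs[-1])["method"])  # nvmf_subsystem_add_listener
```

Once the listener request succeeds, the log prints the corresponding `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice.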
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3368133 00:35:42.708 16:59:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 
Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Write completed with error (sct=0, sc=8) 00:35:42.708 starting I/O failed 00:35:42.708 Read completed with error (sct=0, sc=8) 00:35:42.709 starting I/O failed 00:35:42.709 Read completed with error (sct=0, sc=8) 00:35:42.709 starting I/O failed 00:35:42.709 Write completed with error (sct=0, sc=8) 00:35:42.709 starting I/O failed 00:35:42.709 Read completed with error (sct=0, sc=8) 00:35:42.709 starting I/O failed 00:35:42.709 [2024-11-05 16:59:49.653363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:42.709 [2024-11-05 16:59:49.653790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.653826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.654292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.654329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 
00:35:42.709 [2024-11-05 16:59:49.654609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.654621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.654969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.655007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.655364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.655380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.655686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.655704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.656053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.656092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 
00:35:42.709 [2024-11-05 16:59:49.656429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.656443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.656772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.656785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.657016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.657028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.657366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.657378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.657702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.657714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 
00:35:42.709 [2024-11-05 16:59:49.657904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.657917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.658253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.658265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.658602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.658614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.658916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.658927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.659234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.659246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 
00:35:42.709 [2024-11-05 16:59:49.659581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.659593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.659942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.659953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.660251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.660264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.660459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.660470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.660852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.660865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 
00:35:42.709 [2024-11-05 16:59:49.661180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.661192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.661490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.661502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.661807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.661820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.662128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.662139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.662481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.662493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 
00:35:42.709 [2024-11-05 16:59:49.662647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.662659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.663012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.663024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.709 [2024-11-05 16:59:49.663354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.709 [2024-11-05 16:59:49.663367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.709 qpair failed and we were unable to recover it. 00:35:42.710 [2024-11-05 16:59:49.663635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.710 [2024-11-05 16:59:49.663647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.710 qpair failed and we were unable to recover it. 00:35:42.710 [2024-11-05 16:59:49.663980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.710 [2024-11-05 16:59:49.663994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.710 qpair failed and we were unable to recover it. 
00:35:42.713 [2024-11-05 16:59:49.697222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.697240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.697537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.697555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.697877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.697895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.698212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.698230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.698565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.698586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 
00:35:42.713 [2024-11-05 16:59:49.698915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.698938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.699292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.699313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.699617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.699639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.699942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.699964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.700316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.700339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 
00:35:42.713 [2024-11-05 16:59:49.700651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.700674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.701003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.701026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.701323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.701345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.701657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.701679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.702009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.702032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 
00:35:42.713 [2024-11-05 16:59:49.702240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.702269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.702625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.702648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.702985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.703009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.703336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.703358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.703663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.703686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 
00:35:42.713 [2024-11-05 16:59:49.704011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.713 [2024-11-05 16:59:49.704034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.713 qpair failed and we were unable to recover it. 00:35:42.713 [2024-11-05 16:59:49.704376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.704397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.704701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.704724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.705048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.705072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.705371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.705392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 
00:35:42.714 [2024-11-05 16:59:49.705698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.705721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.706053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.706077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.706372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.706394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.706709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.706732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.707082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.707105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 
00:35:42.714 [2024-11-05 16:59:49.707458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.707481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.707797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.707821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.708155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.708178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.708530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.708552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.708921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.708943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 
00:35:42.714 [2024-11-05 16:59:49.709275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.709298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.709496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.709518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.709880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.709902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.710246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.710276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.710588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.710617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 
00:35:42.714 [2024-11-05 16:59:49.710962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.710992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.711324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.711352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.711711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.711740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.711984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.712017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.712386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.712415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 
00:35:42.714 [2024-11-05 16:59:49.712766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.712797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.713178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.713208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.713537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.714 [2024-11-05 16:59:49.713567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.714 qpair failed and we were unable to recover it. 00:35:42.714 [2024-11-05 16:59:49.713903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.713932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.714289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.714318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 
00:35:42.715 [2024-11-05 16:59:49.714721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.714773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.715132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.715162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.715526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.715556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.715892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.715924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.716267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.716297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 
00:35:42.715 [2024-11-05 16:59:49.716642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.716677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.716997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.717028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.717377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.717407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.717766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.717796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.718140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.718170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 
00:35:42.715 [2024-11-05 16:59:49.718435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.718463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.718815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.718844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.719183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.719213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.719537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.719567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.719927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.719956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 
00:35:42.715 [2024-11-05 16:59:49.720284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.720313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.720548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.720578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.720933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.720962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.721363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.721393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.721737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.721779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 
00:35:42.715 [2024-11-05 16:59:49.722123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.722152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.722381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.722411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.722784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.722816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.723200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.723229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.723580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.723610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 
00:35:42.715 [2024-11-05 16:59:49.723955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.723986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.724338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.724367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.724707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.724737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.715 qpair failed and we were unable to recover it. 00:35:42.715 [2024-11-05 16:59:49.725440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.715 [2024-11-05 16:59:49.725483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.716 qpair failed and we were unable to recover it. 00:35:42.716 [2024-11-05 16:59:49.725764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.716 [2024-11-05 16:59:49.725795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.716 qpair failed and we were unable to recover it. 
00:35:42.716 [2024-11-05 16:59:49.726007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.716 [2024-11-05 16:59:49.726037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.716 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeats continuously from 16:59:49.726 through 16:59:49.768; repeated entries elided ...]
00:35:42.994 [2024-11-05 16:59:49.768872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.768903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 00:35:42.994 [2024-11-05 16:59:49.769232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.769262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 00:35:42.994 [2024-11-05 16:59:49.769598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.769628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 00:35:42.994 [2024-11-05 16:59:49.769960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.769990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 00:35:42.994 [2024-11-05 16:59:49.770319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.770349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 
00:35:42.994 [2024-11-05 16:59:49.770675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.770704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 00:35:42.994 [2024-11-05 16:59:49.771074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.771104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 00:35:42.994 [2024-11-05 16:59:49.771329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.771358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 00:35:42.994 [2024-11-05 16:59:49.771715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.771743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 00:35:42.994 [2024-11-05 16:59:49.772068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.994 [2024-11-05 16:59:49.772099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.994 qpair failed and we were unable to recover it. 
00:35:42.994 [2024-11-05 16:59:49.772459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.772490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.772712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.772744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.773078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.773108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.773459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.773490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.773824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.773855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 
00:35:42.995 [2024-11-05 16:59:49.774067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.774096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.774442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.774471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.774854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.774884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.775248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.775278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.775629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.775658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 
00:35:42.995 [2024-11-05 16:59:49.775999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.776036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.776384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.776414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.776763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.776794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.777137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.777167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.777517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.777548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 
00:35:42.995 [2024-11-05 16:59:49.777896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.777928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.778269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.778298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.778657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.778687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.779028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.779060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.779401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.779431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 
00:35:42.995 [2024-11-05 16:59:49.779789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.779819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.780206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.780235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.780465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.780494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.780910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.780940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.781285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.781314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 
00:35:42.995 [2024-11-05 16:59:49.781649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.781680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.782033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.782064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.782390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.782420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.782771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.782803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.783179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.783208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 
00:35:42.995 [2024-11-05 16:59:49.783528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.783558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.783911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.783942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.784305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.784335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.784691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.784720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.995 [2024-11-05 16:59:49.785105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.785136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 
00:35:42.995 [2024-11-05 16:59:49.785431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.995 [2024-11-05 16:59:49.785462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.995 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.785805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.785836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.786193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.786223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.786574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.786603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.786842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.786871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 
00:35:42.996 [2024-11-05 16:59:49.787215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.787244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.787443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.787471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.787821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.787851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.788196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.788226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.788525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.788553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 
00:35:42.996 [2024-11-05 16:59:49.788911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.788941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.789288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.789317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.789657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.789687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.790051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.790083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.790411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.790441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 
00:35:42.996 [2024-11-05 16:59:49.790840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.790877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.791250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.791280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.791616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.791647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.791886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.791920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.792291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.792320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 
00:35:42.996 [2024-11-05 16:59:49.792661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.792691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.793085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.793117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.793493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.793523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.793861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.793892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.794233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.794263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 
00:35:42.996 [2024-11-05 16:59:49.794619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.794649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.794983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.795014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.795368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.795398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.795743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.795780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.796120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.796150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 
00:35:42.996 [2024-11-05 16:59:49.796519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.796548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.796896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.796927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.797245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.797275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.797629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.797658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 00:35:42.996 [2024-11-05 16:59:49.797968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.996 [2024-11-05 16:59:49.798002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:42.996 qpair failed and we were unable to recover it. 
00:35:43.000 [2024-11-05 16:59:49.839560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.839589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.839916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.839946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.840305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.840335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.840642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.840671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.841011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.841041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 
00:35:43.000 [2024-11-05 16:59:49.841392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.841421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.841784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.841816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.842159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.842189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.842549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.842578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.842897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.842928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 
00:35:43.000 [2024-11-05 16:59:49.843272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.843301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.843666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.843695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.844032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.844062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.844410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.844440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.844806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.844837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 
00:35:43.000 [2024-11-05 16:59:49.845181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.845211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.845546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.845575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.845913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.845944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.846277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.846308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.846657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.846686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 
00:35:43.000 [2024-11-05 16:59:49.847033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.847063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.847414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.847444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.847784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.847814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.848165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.848194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.848543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.848573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 
00:35:43.000 [2024-11-05 16:59:49.848928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.848959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.849282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.849311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.849540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.849568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.849814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.849844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.850220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.850249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 
00:35:43.000 [2024-11-05 16:59:49.850604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.850633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.850965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.850996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.851352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.000 [2024-11-05 16:59:49.851381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.000 qpair failed and we were unable to recover it. 00:35:43.000 [2024-11-05 16:59:49.851731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.851767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.852118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.852149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 
00:35:43.001 [2024-11-05 16:59:49.852501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.852531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.852879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.852909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.853244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.853273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.853642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.853673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.854031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.854062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 
00:35:43.001 [2024-11-05 16:59:49.854403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.854433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.854675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.854707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.855073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.855104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.855479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.855509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.855860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.855891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 
00:35:43.001 [2024-11-05 16:59:49.856260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.856290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.856627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.856657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.857009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.857040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.857404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.857434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.857769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.857799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 
00:35:43.001 [2024-11-05 16:59:49.858187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.858216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.858563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.858592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.858804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.858835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.859187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.859217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.859573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.859603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 
00:35:43.001 [2024-11-05 16:59:49.859934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.859964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.860332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.860362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.860716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.860762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.861110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.861139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.861499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.861529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 
00:35:43.001 [2024-11-05 16:59:49.861843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.861876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.862111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.862140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.862504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.862533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.862863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.862894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.863248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.863277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 
00:35:43.001 [2024-11-05 16:59:49.863630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.863660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.863999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.864030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.864378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.864408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.001 [2024-11-05 16:59:49.864768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.001 [2024-11-05 16:59:49.864799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.001 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.865119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.865149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 
00:35:43.002 [2024-11-05 16:59:49.865494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.865523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.865874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.865906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.866269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.866299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.866629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.866658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.867021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.867051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 
00:35:43.002 [2024-11-05 16:59:49.867326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.867355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.867680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.867710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.868151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.868181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.868568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.868598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 00:35:43.002 [2024-11-05 16:59:49.868907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.002 [2024-11-05 16:59:49.868938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.002 qpair failed and we were unable to recover it. 
00:35:43.005 [2024-11-05 16:59:49.910510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.910560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.910963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.911020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.911435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.911484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.911764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.911817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.912177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.912227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 
00:35:43.005 [2024-11-05 16:59:49.912653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.912704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.913112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.913162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.913559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.913608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.913975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.914028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.914399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.914447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 
00:35:43.005 [2024-11-05 16:59:49.914865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.914919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.915291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.915339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.915717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.915774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.916186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.916234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.916572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.916622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 
00:35:43.005 [2024-11-05 16:59:49.916997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.005 [2024-11-05 16:59:49.917034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.005 qpair failed and we were unable to recover it. 00:35:43.005 [2024-11-05 16:59:49.917391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.917421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.917762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.917792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.918096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.918127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.918465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.918508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 
00:35:43.006 [2024-11-05 16:59:49.918917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.918969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.919313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.919360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.919777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.919831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.920200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.920241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.920601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.920646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 
00:35:43.006 [2024-11-05 16:59:49.921017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.921059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.921411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.921453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.921832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.921879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.922217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.922259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.922673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.922712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 
00:35:43.006 [2024-11-05 16:59:49.923052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.923094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.923478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.923518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.923926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.923973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.924339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.924380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.924776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.924820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 
00:35:43.006 [2024-11-05 16:59:49.925194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.925235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.925585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.925627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.925872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.925916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.926296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.926338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.926712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.926762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 
00:35:43.006 [2024-11-05 16:59:49.927119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.927148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.927529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.927572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.927968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.928007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.928375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.928416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.928727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.928778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 
00:35:43.006 [2024-11-05 16:59:49.929165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.929205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.929577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.929617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.929978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.930020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.930379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.930408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.930777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.930818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 
00:35:43.006 [2024-11-05 16:59:49.931211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.931242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.006 [2024-11-05 16:59:49.931609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.006 [2024-11-05 16:59:49.931651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.006 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.932033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.932063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.932308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.932346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.932762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.932804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 
00:35:43.007 [2024-11-05 16:59:49.933060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.933100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.933446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.933485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.933848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.933889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.934241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.934281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.934657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.934701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 
00:35:43.007 [2024-11-05 16:59:49.935081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.935123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.935464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.935497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.935878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.935903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.936242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.936275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.936618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.936650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 
00:35:43.007 [2024-11-05 16:59:49.936982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.937014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.937363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.937397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.937757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.937798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.938123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.938155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.938491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.938524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 
00:35:43.007 [2024-11-05 16:59:49.938900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.938934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.939281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.939313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.939656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.939689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.940043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.940077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 00:35:43.007 [2024-11-05 16:59:49.940453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.940485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it. 
00:35:43.007 [2024-11-05 16:59:49.940820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.007 [2024-11-05 16:59:49.940854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.007 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." messages for tqpair=0x7f0cc0000b90, addr=10.0.0.2, port=4420 repeat continuously from 16:59:49.941210 through 16:59:49.978022; repeats elided]
00:35:43.011 [2024-11-05 16:59:49.978342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.978361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.978608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.978625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.978884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.978905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.979247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.979266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.979341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.979358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 
00:35:43.011 [2024-11-05 16:59:49.979663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.979682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.979999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.980020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.980348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.980365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.980689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.980709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.981025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.981045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 
00:35:43.011 [2024-11-05 16:59:49.981367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.981386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.981672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.981691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.982034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.982055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.982382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.982402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.982701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.982721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 
00:35:43.011 [2024-11-05 16:59:49.983013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.983032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.983379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.983399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.983719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.983739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.983961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.983980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.984297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.984311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 
00:35:43.011 [2024-11-05 16:59:49.984653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.984665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.984951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.984963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.985269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.985281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.985583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.985595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.985922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.985940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 
00:35:43.011 [2024-11-05 16:59:49.986131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.986149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.986496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.986517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.986886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.986906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.987244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.987264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 [2024-11-05 16:59:49.987592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.987611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 
00:35:43.011 [2024-11-05 16:59:49.987716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.011 [2024-11-05 16:59:49.987732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc0000b90 with addr=10.0.0.2, port=4420 00:35:43.011 qpair failed and we were unable to recover it. 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.011 starting I/O failed 00:35:43.011 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 
Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Read completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 Write completed with error (sct=0, sc=8) 00:35:43.012 starting I/O failed 00:35:43.012 [2024-11-05 16:59:49.987963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:43.012 [2024-11-05 16:59:49.988424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.988466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 
00:35:43.012 [2024-11-05 16:59:49.988950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.988986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.989316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.989327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.989624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.989634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.989967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.990002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.990377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.990387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 
00:35:43.012 [2024-11-05 16:59:49.990721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.990730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.991021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.991055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.991374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.991386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.991697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.991707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.991991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.992000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 
00:35:43.012 [2024-11-05 16:59:49.992307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.992315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.992619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.992628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.992814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.992823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.993159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.993169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.993557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.993566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 
00:35:43.012 [2024-11-05 16:59:49.993823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.993832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.994148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.994158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.994486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.994494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.994816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.994825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.995134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.995143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 
00:35:43.012 [2024-11-05 16:59:49.995448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.995457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.995781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.995790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.996126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.996134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.996462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.996470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 00:35:43.012 [2024-11-05 16:59:49.996786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.012 [2024-11-05 16:59:49.996795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.012 qpair failed and we were unable to recover it. 
00:35:43.012 [2024-11-05 16:59:49.997107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.997121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 00:35:43.013 [2024-11-05 16:59:49.997425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.997434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 00:35:43.013 [2024-11-05 16:59:49.997736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.997745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 00:35:43.013 [2024-11-05 16:59:49.997932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.997941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 00:35:43.013 [2024-11-05 16:59:49.998248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.998256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 
00:35:43.013 [2024-11-05 16:59:49.998421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.998430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 00:35:43.013 [2024-11-05 16:59:49.998730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.998739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 00:35:43.013 [2024-11-05 16:59:49.998908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.998917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 00:35:43.013 [2024-11-05 16:59:49.999234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.999243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 00:35:43.013 [2024-11-05 16:59:49.999531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.013 [2024-11-05 16:59:49.999539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.013 qpair failed and we were unable to recover it. 
00:35:43.013 [2024-11-05 16:59:49.999836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.013 [2024-11-05 16:59:49.999845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.013 qpair failed and we were unable to recover it.
[... identical three-line connect()/qpair-failure sequence repeats, timestamps advancing through 2024-11-05 16:59:50.036491; repeats omitted ...]
00:35:43.016 [2024-11-05 16:59:50.036825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.036837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.016 [2024-11-05 16:59:50.037151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.037161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.016 [2024-11-05 16:59:50.037453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.037461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.016 [2024-11-05 16:59:50.037776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.037786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.016 [2024-11-05 16:59:50.038098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.038108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 
00:35:43.016 [2024-11-05 16:59:50.038297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.038307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.016 [2024-11-05 16:59:50.038632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.038641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.016 [2024-11-05 16:59:50.038954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.038962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.016 [2024-11-05 16:59:50.039251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.039260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.016 [2024-11-05 16:59:50.039568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.039578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 
00:35:43.016 [2024-11-05 16:59:50.039902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.016 [2024-11-05 16:59:50.039910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.016 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.040285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.040293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.040591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.040598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.040912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.040922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.041222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.041230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 
00:35:43.017 [2024-11-05 16:59:50.041540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.041548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.041895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.041905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.042232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.042240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.042431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.042439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.042703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.042712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 
00:35:43.017 [2024-11-05 16:59:50.042916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.042927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.043186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.043195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.043452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.043462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.043769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.043778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.044060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.044068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 
00:35:43.017 [2024-11-05 16:59:50.044419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.044427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.017 [2024-11-05 16:59:50.044731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.017 [2024-11-05 16:59:50.044741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.017 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.045056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.045066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.045362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.045372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.045681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.045691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 
00:35:43.293 [2024-11-05 16:59:50.045868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.045878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.046172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.046180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.046358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.046367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.046696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.046705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.047018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.047027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 
00:35:43.293 [2024-11-05 16:59:50.047214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.047225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.047446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.293 [2024-11-05 16:59:50.047456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.293 qpair failed and we were unable to recover it. 00:35:43.293 [2024-11-05 16:59:50.047747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.047759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.048123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.048131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.048461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.048470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 
00:35:43.294 [2024-11-05 16:59:50.048788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.048798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.049150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.049158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.049472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.049480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.049727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.049736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.050089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.050097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 
00:35:43.294 [2024-11-05 16:59:50.050308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.050316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.050615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.050626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.050928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.050937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.051247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.051256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.051446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.051454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 
00:35:43.294 [2024-11-05 16:59:50.051773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.051782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.052036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.052044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.052427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.052436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.052739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.052751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.053050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.053061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 
00:35:43.294 [2024-11-05 16:59:50.053391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.053402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.053695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.053703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.053997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.054006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.054314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.054322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.054631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.054641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 
00:35:43.294 [2024-11-05 16:59:50.054977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.054985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.055306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.055316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.055682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.055692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.055994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.056003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.056298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.056306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 
00:35:43.294 [2024-11-05 16:59:50.056617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.056627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.056913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.056922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.057234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.057243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.057432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.057441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.057763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.057772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 
00:35:43.294 [2024-11-05 16:59:50.058085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.058094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.058409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.058418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.058710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.058719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.058925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.058934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 00:35:43.294 [2024-11-05 16:59:50.059235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.294 [2024-11-05 16:59:50.059244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.294 qpair failed and we were unable to recover it. 
00:35:43.295 [2024-11-05 16:59:50.059485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.295 [2024-11-05 16:59:50.059494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.295 qpair failed and we were unable to recover it. 
00:35:43.298 [previous two messages repeated 114 more times between 16:59:50.059822 and 16:59:50.093822: connect() failed, errno = 111 (ECONNREFUSED) for tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it."] 
00:35:43.298 [2024-11-05 16:59:50.094194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.094203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.094505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.094513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.094722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.094731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.095033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.095042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.095350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.095359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 
00:35:43.298 [2024-11-05 16:59:50.095667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.095679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.095981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.095990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.096272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.096280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.096577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.096586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.096875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.096884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 
00:35:43.298 [2024-11-05 16:59:50.097189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.097198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.097505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.097513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.097826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.097834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.098095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.098103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.098394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.098403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 
00:35:43.298 [2024-11-05 16:59:50.098708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.098716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.098999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.099009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.099313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.099322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.099627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.099635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.099911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.099920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 
00:35:43.298 [2024-11-05 16:59:50.100237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.100245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.100439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.100447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.100749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.100759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.101073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.101082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.101389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.101398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 
00:35:43.298 [2024-11-05 16:59:50.101704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.101713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.101994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.102002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.102242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.102250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.102601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.102610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 00:35:43.298 [2024-11-05 16:59:50.102921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.298 [2024-11-05 16:59:50.102930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.298 qpair failed and we were unable to recover it. 
00:35:43.298 [2024-11-05 16:59:50.103329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.103337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.103683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.103692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.103997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.104007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.104315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.104324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.104612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.104621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 
00:35:43.299 [2024-11-05 16:59:50.104916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.104925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.105132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.105140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.105453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.105461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.105743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.105765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.106205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.106213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 
00:35:43.299 [2024-11-05 16:59:50.106533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.106542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.106874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.106883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.107091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.107099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.107385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.107393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.107713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.107721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 
00:35:43.299 [2024-11-05 16:59:50.108089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.108100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.108405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.108413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.108593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.108602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.108944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.108952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.109263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.109272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 
00:35:43.299 [2024-11-05 16:59:50.109546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.109555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.109862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.109871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.110178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.110189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.110503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.110512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.110706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.110715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 
00:35:43.299 [2024-11-05 16:59:50.111015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.111024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.111329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.111337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.111648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.111656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.111967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.111977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.112276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.112285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 
00:35:43.299 [2024-11-05 16:59:50.112596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.112605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.112910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.112918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.113236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.113246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.113551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.113559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.113864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.113872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 
00:35:43.299 [2024-11-05 16:59:50.114182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.114191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.299 [2024-11-05 16:59:50.114487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.299 [2024-11-05 16:59:50.114497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.299 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.114809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.114818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.115141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.115151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.115460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.115469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 
00:35:43.300 [2024-11-05 16:59:50.115753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.115761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.116080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.116088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.116395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.116405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.116717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.116726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.117051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.117061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 
00:35:43.300 [2024-11-05 16:59:50.117365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.117373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.117683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.117692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.118000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.118008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.118335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.118344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 00:35:43.300 [2024-11-05 16:59:50.118664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.300 [2024-11-05 16:59:50.118674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.300 qpair failed and we were unable to recover it. 
00:35:43.303 [2024-11-05 16:59:50.153444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.153452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.153761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.153769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.154113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.154123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.154395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.154403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.154595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.154603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 
00:35:43.303 [2024-11-05 16:59:50.154872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.154880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.155210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.155220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.155523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.155532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.155821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.155829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.156112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.156121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 
00:35:43.303 [2024-11-05 16:59:50.156433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.156443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.156788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.156797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.157104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.157114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.157423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.157431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.157754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.157764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 
00:35:43.303 [2024-11-05 16:59:50.158067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.158076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.158372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.158380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.158651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.158659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.158962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.158971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.159278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.159287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 
00:35:43.303 [2024-11-05 16:59:50.159616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.159626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.159913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.159922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.160227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.160237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.160532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.160542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 00:35:43.303 [2024-11-05 16:59:50.160884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.303 [2024-11-05 16:59:50.160892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.303 qpair failed and we were unable to recover it. 
00:35:43.303 [2024-11-05 16:59:50.161203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.161211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.161520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.161528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.161865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.161874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.162204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.162213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.162518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.162528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 
00:35:43.304 [2024-11-05 16:59:50.162807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.162815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.163033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.163041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.163347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.163356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.163693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.163701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.164010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.164019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 
00:35:43.304 [2024-11-05 16:59:50.164327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.164336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.164527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.164537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.164811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.164819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.165144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.165153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.165464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.165472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 
00:35:43.304 [2024-11-05 16:59:50.165834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.165842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.166149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.166157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.166436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.166446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.166756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.166765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.167042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.167050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 
00:35:43.304 [2024-11-05 16:59:50.167322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.167330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.167604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.167613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.167914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.167923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.168287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.168295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.168518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.168526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 
00:35:43.304 [2024-11-05 16:59:50.168778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.168785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.169150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.169160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.169369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.169377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.169675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.169684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.170000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.170009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 
00:35:43.304 [2024-11-05 16:59:50.170340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.170348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.170688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.170696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.170925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.170934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.171305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.171314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.171504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.171513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 
00:35:43.304 [2024-11-05 16:59:50.171806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.171815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.172139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.172147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.172339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.172348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.304 qpair failed and we were unable to recover it. 00:35:43.304 [2024-11-05 16:59:50.172458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.304 [2024-11-05 16:59:50.172466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.172758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.172766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 
00:35:43.305 [2024-11-05 16:59:50.173039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.173047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.173346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.173355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.173660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.173669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.173964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.173972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.174283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.174292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 
00:35:43.305 [2024-11-05 16:59:50.174610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.174619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.174913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.174921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.175210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.175218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.175559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.175567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.175874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.175882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 
00:35:43.305 [2024-11-05 16:59:50.176183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.176192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.176514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.176522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.176823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.176832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.177152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.177160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 00:35:43.305 [2024-11-05 16:59:50.177474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.305 [2024-11-05 16:59:50.177483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.305 qpair failed and we were unable to recover it. 
00:35:43.308 [2024-11-05 16:59:50.209663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.209670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.209985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.209993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.210289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.210296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.210606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.210615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.210914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.210922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 
00:35:43.308 [2024-11-05 16:59:50.211248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.211257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.211551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.211560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.211839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.211847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.212179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.212187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.212495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.212504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 
00:35:43.308 [2024-11-05 16:59:50.212674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.212684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.212980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.212988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.213308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.213318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.213633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.213642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.213948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.213957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 
00:35:43.308 [2024-11-05 16:59:50.214289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.214298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.214605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.214613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.214777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.214785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.215073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.215082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.215405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.215414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 
00:35:43.308 [2024-11-05 16:59:50.215720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.215729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.216012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.216021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.216314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.308 [2024-11-05 16:59:50.216322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.308 qpair failed and we were unable to recover it. 00:35:43.308 [2024-11-05 16:59:50.216648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.216657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.216998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.217007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 
00:35:43.309 [2024-11-05 16:59:50.217318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.217326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.217656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.217665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.217975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.217985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.218328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.218337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.218636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.218645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 
00:35:43.309 [2024-11-05 16:59:50.218953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.218962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.219257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.219266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.219572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.219581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.219888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.219898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.220199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.220208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 
00:35:43.309 [2024-11-05 16:59:50.220505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.220514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.220790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.220799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.221121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.221129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.221426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.221434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.221776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.221784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 
00:35:43.309 [2024-11-05 16:59:50.222091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.222100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.222404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.222412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.222697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.222705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.223013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.223021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.223335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.223343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 
00:35:43.309 [2024-11-05 16:59:50.223666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.223675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.223992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.224001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.224310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.224318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.224582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.224591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.224888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.224896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 
00:35:43.309 [2024-11-05 16:59:50.225204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.225213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.225539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.225547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.225854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.225865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.226076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.226084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.226392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.226401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 
00:35:43.309 [2024-11-05 16:59:50.226708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.226716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.227031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.227040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.227352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.227360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.227656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.227665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.227969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.227977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 
00:35:43.309 [2024-11-05 16:59:50.228358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.228367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.309 [2024-11-05 16:59:50.228659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.309 [2024-11-05 16:59:50.228668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.309 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.228963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.228972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.229240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.229248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.229571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.229580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 
00:35:43.310 [2024-11-05 16:59:50.229879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.229888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.230215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.230224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.230529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.230538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.230759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.230767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.231039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.231048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 
00:35:43.310 [2024-11-05 16:59:50.231368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.231377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.231691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.231700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.231982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.231990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.232315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.232323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 00:35:43.310 [2024-11-05 16:59:50.232611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.310 [2024-11-05 16:59:50.232619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.310 qpair failed and we were unable to recover it. 
00:35:43.310 [2024-11-05 16:59:50.232811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.310 [2024-11-05 16:59:50.232819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.310 qpair failed and we were unable to recover it.
[the three log lines above repeat ~115 times between 16:59:50.232811 and 16:59:50.266781, with identical tqpair=0x7f0cc4000b90, addr=10.0.0.2, port=4420, errno=111; only the timestamps differ — repeats elided]
00:35:43.313 [2024-11-05 16:59:50.266997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.313 [2024-11-05 16:59:50.267005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.313 qpair failed and we were unable to recover it.
00:35:43.313 [2024-11-05 16:59:50.267109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.267117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.267474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.267481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.267801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.267810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.268134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.268142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.268469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.268478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 
00:35:43.313 [2024-11-05 16:59:50.268744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.268754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.269028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.269036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.269195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.269205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.269484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.269492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.269782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.269791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 
00:35:43.313 [2024-11-05 16:59:50.270106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.270114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.270416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.270424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.270729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.270736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.271038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.271047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.271200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.271209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 
00:35:43.313 [2024-11-05 16:59:50.271515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.271523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.271796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.271804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.272101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.272109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.272405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.272413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 00:35:43.313 [2024-11-05 16:59:50.272767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.313 [2024-11-05 16:59:50.272776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.313 qpair failed and we were unable to recover it. 
00:35:43.313 [2024-11-05 16:59:50.272980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.272988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.273251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.273259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.273534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.273544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.273843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.273851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.274174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.274182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 
00:35:43.314 [2024-11-05 16:59:50.274474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.274482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.274770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.274778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.275084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.275092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.275394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.275404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.275706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.275714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 
00:35:43.314 [2024-11-05 16:59:50.276002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.276019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.276302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.276310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.276622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.276631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.276921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.276929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.277256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.277265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 
00:35:43.314 [2024-11-05 16:59:50.277573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.277581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.277892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.277901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.278200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.278208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.278508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.278516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.278816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.278824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 
00:35:43.314 [2024-11-05 16:59:50.279151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.279160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.279437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.279445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.279731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.279739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.280033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.280041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.280345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.280354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 
00:35:43.314 [2024-11-05 16:59:50.280657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.280665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.280960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.280968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.281276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.281285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.281578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.281586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.281908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.281917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 
00:35:43.314 [2024-11-05 16:59:50.282232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.282240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.282439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.282447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.282728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.282736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.283080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.283087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.283412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.283421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 
00:35:43.314 [2024-11-05 16:59:50.283723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.283731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.283908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.283915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.314 [2024-11-05 16:59:50.284201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.314 [2024-11-05 16:59:50.284209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.314 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.284497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.284505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.284812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.284820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 
00:35:43.315 [2024-11-05 16:59:50.285117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.285125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.285428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.285436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.285729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.285740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.286086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.286094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.286416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.286425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 
00:35:43.315 [2024-11-05 16:59:50.286733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.286740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.287048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.287056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.287360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.287367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.287674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.287683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.287991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.288000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 
00:35:43.315 [2024-11-05 16:59:50.288333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.288342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.288643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.288651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.288966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.288974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.289268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.289276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.289600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.289608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 
00:35:43.315 [2024-11-05 16:59:50.289910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.289919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.290228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.290236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.290543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.290551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.290864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.290873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 00:35:43.315 [2024-11-05 16:59:50.291189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.315 [2024-11-05 16:59:50.291197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.315 qpair failed and we were unable to recover it. 
00:35:43.318 [2024-11-05 16:59:50.325838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.325847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.326163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.326173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.326476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.326484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.326794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.326803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.327122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.327130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 
00:35:43.318 [2024-11-05 16:59:50.327478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.327486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.327796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.327804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.327974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.327983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.328245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.328253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.328545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.328553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 
00:35:43.318 [2024-11-05 16:59:50.328888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.328897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.329163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.329171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.329475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.329484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.329772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.329781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.330095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.330103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 
00:35:43.318 [2024-11-05 16:59:50.330439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.330447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.330743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.330754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.331068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.331075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.331235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.331243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.331519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.331529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 
00:35:43.318 [2024-11-05 16:59:50.331794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.331803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.332125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.332134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.332440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.332448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.332753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.332761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.333064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.333072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 
00:35:43.318 [2024-11-05 16:59:50.333375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.333384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.318 [2024-11-05 16:59:50.333685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.318 [2024-11-05 16:59:50.333693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.318 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.333870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.333878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.334209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.334217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.334522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.334531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 
00:35:43.319 [2024-11-05 16:59:50.334858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.334867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.335182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.335191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.335499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.335507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.335821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.335831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.336133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.336141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 
00:35:43.319 [2024-11-05 16:59:50.336443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.336452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.336756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.336764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.337071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.337079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.337390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.337398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.337697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.337705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 
00:35:43.319 [2024-11-05 16:59:50.338020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.338030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.338344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.338352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.338664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.338671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.338978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.338987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.339293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.339303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 
00:35:43.319 [2024-11-05 16:59:50.339588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.339596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.339903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.339911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.340216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.340223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.340390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.340399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.340606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.340613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 
00:35:43.319 [2024-11-05 16:59:50.340966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.340974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.341173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.341181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.319 [2024-11-05 16:59:50.341477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.319 [2024-11-05 16:59:50.341484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.319 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.341773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.342096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.342106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 
00:35:43.596 [2024-11-05 16:59:50.342414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.342423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.342610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.342617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.342930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.342938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.343242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.343251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.343560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.343569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 
00:35:43.596 [2024-11-05 16:59:50.343879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.343887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.344192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.344201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.344498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.344508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.344815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.344823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.345134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.345143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 
00:35:43.596 [2024-11-05 16:59:50.345468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.345476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.345787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.345796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.346108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.346116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.346416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.346424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.346710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.346719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 
00:35:43.596 [2024-11-05 16:59:50.346934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.346943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.347237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.347245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.347589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.347597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.347897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.347905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.348175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.348183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 
00:35:43.596 [2024-11-05 16:59:50.348492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.348501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.348818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.348826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.349109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.349117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.349438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.349445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 00:35:43.596 [2024-11-05 16:59:50.349752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.596 [2024-11-05 16:59:50.349760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.596 qpair failed and we were unable to recover it. 
00:35:43.599 [2024-11-05 16:59:50.383313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.383321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.383604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.383613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.383902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.383911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.384240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.384248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.384552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.384561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 
00:35:43.599 [2024-11-05 16:59:50.384865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.384874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.385226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.385235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.385541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.385549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.385856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.385864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.386172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.386180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 
00:35:43.599 [2024-11-05 16:59:50.386368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.599 [2024-11-05 16:59:50.386376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.599 qpair failed and we were unable to recover it. 00:35:43.599 [2024-11-05 16:59:50.386685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.386694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.386990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.386999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.387345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.387353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.387669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.387677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 
00:35:43.600 [2024-11-05 16:59:50.387971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.387980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.388283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.388292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.388595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.388604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.388915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.388923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.389299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.389307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 
00:35:43.600 [2024-11-05 16:59:50.389613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.389622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.390027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.390035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.390334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.390343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.390666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.390674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.390975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.390984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 
00:35:43.600 [2024-11-05 16:59:50.391288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.391296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.391609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.391618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.391940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.391948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.392134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.392143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.392420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.392430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 
00:35:43.600 [2024-11-05 16:59:50.392748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.392756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.392959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.392968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.393283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.393292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.393600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.393608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.393912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.393920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 
00:35:43.600 [2024-11-05 16:59:50.394214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.394222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.394513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.394522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.394808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.394816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.395122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.395130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.395494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.395502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 
00:35:43.600 [2024-11-05 16:59:50.395797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.395805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.396127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.396135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.396404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.396412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.396702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.600 [2024-11-05 16:59:50.396710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.600 qpair failed and we were unable to recover it. 00:35:43.600 [2024-11-05 16:59:50.396984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.396993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 
00:35:43.601 [2024-11-05 16:59:50.397204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.397213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.397528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.397537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.397825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.397833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.398144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.398153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.398446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.398455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 
00:35:43.601 [2024-11-05 16:59:50.398772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.398781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.399119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.399127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.399429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.399438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.399757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.399766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.400065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.400073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 
00:35:43.601 [2024-11-05 16:59:50.400397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.400405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.400730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.400738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.401040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.401058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.401377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.401385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.401681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.401689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 
00:35:43.601 [2024-11-05 16:59:50.401982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.401990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.402320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.402329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.402636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.402644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.402966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.402975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.403274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.403282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 
00:35:43.601 [2024-11-05 16:59:50.403585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.403594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.403902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.403911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.404210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.404219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.404508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.404517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.404821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.404832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 
00:35:43.601 [2024-11-05 16:59:50.405142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.405150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.405483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.405491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.405801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.405809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.406112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.406120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 00:35:43.601 [2024-11-05 16:59:50.406430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.406438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 
00:35:43.601 [2024-11-05 16:59:50.406747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.601 [2024-11-05 16:59:50.406756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.601 qpair failed and we were unable to recover it. 
00:35:43.601 [... identical three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated continuously from 16:59:50.407052 through 16:59:50.441414; timestamps are the only variation ...]
00:35:43.604 [2024-11-05 16:59:50.441586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.604 [2024-11-05 16:59:50.441594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.604 qpair failed and we were unable to recover it. 00:35:43.604 [2024-11-05 16:59:50.441888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.604 [2024-11-05 16:59:50.441897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.604 qpair failed and we were unable to recover it. 00:35:43.604 [2024-11-05 16:59:50.442213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.604 [2024-11-05 16:59:50.442220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.604 qpair failed and we were unable to recover it. 00:35:43.604 [2024-11-05 16:59:50.442565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.604 [2024-11-05 16:59:50.442574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.604 qpair failed and we were unable to recover it. 00:35:43.604 [2024-11-05 16:59:50.442772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.604 [2024-11-05 16:59:50.442781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.604 qpair failed and we were unable to recover it. 
00:35:43.604 [2024-11-05 16:59:50.443056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.604 [2024-11-05 16:59:50.443066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.604 qpair failed and we were unable to recover it. 00:35:43.604 [2024-11-05 16:59:50.443267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.604 [2024-11-05 16:59:50.443275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.604 qpair failed and we were unable to recover it. 00:35:43.604 [2024-11-05 16:59:50.443469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.604 [2024-11-05 16:59:50.443478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.604 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.443778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.443786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.444027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.444035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 
00:35:43.605 [2024-11-05 16:59:50.444346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.444355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.444681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.444690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.444965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.444973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.445289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.445297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.445602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.445610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 
00:35:43.605 [2024-11-05 16:59:50.445885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.445893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.446191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.446198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.446569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.446578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.446856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.446865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.447046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.447055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 
00:35:43.605 [2024-11-05 16:59:50.447260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.447267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.447621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.447629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.447802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.447810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.448091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.448100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.448315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.448323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 
00:35:43.605 [2024-11-05 16:59:50.448491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.448499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.448765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.448774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.449089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.449097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.449364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.449371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.449673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.449681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 
00:35:43.605 [2024-11-05 16:59:50.449960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.449968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.450238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.450246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.450434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.450443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.450634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.450644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.450934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.450942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 
00:35:43.605 [2024-11-05 16:59:50.451237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.451245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.451420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.451429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.451715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.451724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.452021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.452030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.452191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.452200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 
00:35:43.605 [2024-11-05 16:59:50.452383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.452392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.605 [2024-11-05 16:59:50.452659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.605 [2024-11-05 16:59:50.452667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.605 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.452979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.452988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.453293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.453302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.453583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.453592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 
00:35:43.606 [2024-11-05 16:59:50.453791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.453798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.454114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.454122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.454416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.454424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.454758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.454765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.454920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.454929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 
00:35:43.606 [2024-11-05 16:59:50.455197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.455205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.455515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.455524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.455820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.455828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.456187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.456196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.456482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.456491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 
00:35:43.606 [2024-11-05 16:59:50.456804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.456813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.457130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.457138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.457472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.457480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.457820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.457829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.458125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.458133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 
00:35:43.606 [2024-11-05 16:59:50.458455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.458464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.458765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.458774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.459113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.459121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.459426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.459434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.459623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.459630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 
00:35:43.606 [2024-11-05 16:59:50.459925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.459933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.460266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.460276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.460584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.460592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.460900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.460908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.461238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.461245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 
00:35:43.606 [2024-11-05 16:59:50.461564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.461573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.461875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.461883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.462180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.462189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.462505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.462513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.462812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.462820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 
00:35:43.606 [2024-11-05 16:59:50.463141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.463150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.463267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.463275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.463588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.463597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.463931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.463939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 00:35:43.606 [2024-11-05 16:59:50.464262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.606 [2024-11-05 16:59:50.464271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.606 qpair failed and we were unable to recover it. 
00:35:43.609 [2024-11-05 16:59:50.499107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-11-05 16:59:50.499115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.609 qpair failed and we were unable to recover it. 00:35:43.609 [2024-11-05 16:59:50.499292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-11-05 16:59:50.499302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.609 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.499661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.499669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.499983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.499992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.500311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.500319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 
00:35:43.610 [2024-11-05 16:59:50.500594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.500602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.500908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.500915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.501315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.501323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.501630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.501639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.501876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.501884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 
00:35:43.610 [2024-11-05 16:59:50.502190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.502207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.502371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.502380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.502603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.502610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.502909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.502917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.503237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.503246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 
00:35:43.610 [2024-11-05 16:59:50.503534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.503542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.503859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.503868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.504122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.504130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.504455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.504464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.504754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.504763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 
00:35:43.610 [2024-11-05 16:59:50.504953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.504960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.505134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.505142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.505420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.505429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.505753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.505762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.506081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.506090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 
00:35:43.610 [2024-11-05 16:59:50.506332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.506341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.506646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.506654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.506864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.506872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.507257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.507265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.507586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.507594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 
00:35:43.610 [2024-11-05 16:59:50.507883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.507891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.508181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.508191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.508523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.508531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.508856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.508864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.509175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.509183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 
00:35:43.610 [2024-11-05 16:59:50.509542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.509550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.509958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.509968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.510259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.510266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.510568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.510576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 00:35:43.610 [2024-11-05 16:59:50.510855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-11-05 16:59:50.510864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.610 qpair failed and we were unable to recover it. 
00:35:43.610 [2024-11-05 16:59:50.511149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.511157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.511463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.511473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.511769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.511777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.512067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.512075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.512379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.512387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 
00:35:43.611 [2024-11-05 16:59:50.512561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.512569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.512728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.512736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.513051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.513059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.513443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.513452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.513694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.513704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 
00:35:43.611 [2024-11-05 16:59:50.514011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.514019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.514344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.514352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.514560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.514570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.514873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.514882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.515210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.515218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 
00:35:43.611 [2024-11-05 16:59:50.515421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.515429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.515613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.515621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.515920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.515929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.516252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.516260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.516556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.516563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 
00:35:43.611 [2024-11-05 16:59:50.516867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.516875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.517170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.517179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.517371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.517380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.517646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.517655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.518347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.518366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 
00:35:43.611 [2024-11-05 16:59:50.518700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.518710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.519307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.519323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.519630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.519639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.520465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.520484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.520767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.520776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 
00:35:43.611 [2024-11-05 16:59:50.521081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.521089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.521396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.521404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.521701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.521709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.522065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.522074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 00:35:43.611 [2024-11-05 16:59:50.522397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.611 [2024-11-05 16:59:50.522405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.611 qpair failed and we were unable to recover it. 
00:35:43.611 [2024-11-05 16:59:50.522710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.611 [2024-11-05 16:59:50.522719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.611 qpair failed and we were unable to recover it.
00:35:43.614 (the three-line sequence above repeated for every retry from 16:59:50.522892 through 16:59:50.558622; each connect() to 10.0.0.2:4420 for tqpair=0x7f0cc4000b90 failed with errno = 111 and the qpair could not be recovered)
00:35:43.615 [2024-11-05 16:59:50.558919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.558927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.559255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.559263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.559588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.559597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.559912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.559921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.560227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.560236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 
00:35:43.615 [2024-11-05 16:59:50.560551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.560560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.560925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.560936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.561218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.561226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.561421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.561429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.561685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.561694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 
00:35:43.615 [2024-11-05 16:59:50.562014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.562022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.562332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.562341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.562651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.562660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.562881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.562890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.563080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.563089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 
00:35:43.615 [2024-11-05 16:59:50.563368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.563377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.563693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.563702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.564018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.564027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.564319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.564329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.564633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.564642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 
00:35:43.615 [2024-11-05 16:59:50.564931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.564940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.565268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.565276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.565601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.565609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.565819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.565829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.566137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.566145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 
00:35:43.615 [2024-11-05 16:59:50.566467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.566476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.566809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.566817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.567144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.567153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.567498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.567506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.567772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.567781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 
00:35:43.615 [2024-11-05 16:59:50.568115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.568123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.568298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.568305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.568635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.568642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.568939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.568948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.569246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.569255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 
00:35:43.615 [2024-11-05 16:59:50.569561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.569570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.569776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.569784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.615 [2024-11-05 16:59:50.570130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.615 [2024-11-05 16:59:50.570138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.615 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.570461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.570469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.570777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.570786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 
00:35:43.616 [2024-11-05 16:59:50.571008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.571016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.571346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.571354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.571678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.571686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.571991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.572001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.572304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.572312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 
00:35:43.616 [2024-11-05 16:59:50.572634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.572643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.572941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.572951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.573256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.573265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.573570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.573579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.573776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.573784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 
00:35:43.616 [2024-11-05 16:59:50.574057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.574066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.574762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.574778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.575171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.575180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.575512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.575521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.575861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.575870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 
00:35:43.616 [2024-11-05 16:59:50.576200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.576208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.576524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.576533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.576857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.576866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.577665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.577680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.577890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.577899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 
00:35:43.616 [2024-11-05 16:59:50.578207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.578215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.578512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.578520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.578812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.578820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.579184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.579192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.579384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.579391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 
00:35:43.616 [2024-11-05 16:59:50.579671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.579680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.579914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.579922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.580193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.580202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.580519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.580527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.580813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.580821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 
00:35:43.616 [2024-11-05 16:59:50.581395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.581410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.581726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.581735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.582051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.582059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.582367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.582376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.582669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.582678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 
00:35:43.616 [2024-11-05 16:59:50.583050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.616 [2024-11-05 16:59:50.583060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.616 qpair failed and we were unable to recover it. 00:35:43.616 [2024-11-05 16:59:50.583366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.617 [2024-11-05 16:59:50.583375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.617 qpair failed and we were unable to recover it. 00:35:43.617 [2024-11-05 16:59:50.583692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.617 [2024-11-05 16:59:50.583702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.617 qpair failed and we were unable to recover it. 00:35:43.617 [2024-11-05 16:59:50.584018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.617 [2024-11-05 16:59:50.584028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.617 qpair failed and we were unable to recover it. 00:35:43.617 [2024-11-05 16:59:50.584216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.617 [2024-11-05 16:59:50.584225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.617 qpair failed and we were unable to recover it. 
00:35:43.620 [2024-11-05 16:59:50.619307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.619316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.619550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.619567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.619884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.619895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.620097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.620107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.620292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.620302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 
00:35:43.620 [2024-11-05 16:59:50.620598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.620606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.620911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.620920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.621270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.621279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.621569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.621578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.621883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.621891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 
00:35:43.620 [2024-11-05 16:59:50.622212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.622221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.622529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.622537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.622835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.622844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.623032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.623040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.623321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.623329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 
00:35:43.620 [2024-11-05 16:59:50.623612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.623620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.623912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.623921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.624233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.624240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.624535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.624544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.624835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.624843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 
00:35:43.620 [2024-11-05 16:59:50.625021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.625029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.625244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.625252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.625523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.625531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.625724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.625732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.626010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.626019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 
00:35:43.620 [2024-11-05 16:59:50.626303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.626313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.626634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.626643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.626952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.626961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.627264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.627273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.627578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.627586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 
00:35:43.620 [2024-11-05 16:59:50.627895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.627904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.628223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.628230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.628572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.628581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.628900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.628908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 00:35:43.620 [2024-11-05 16:59:50.629194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.629202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.620 qpair failed and we were unable to recover it. 
00:35:43.620 [2024-11-05 16:59:50.629587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.620 [2024-11-05 16:59:50.629595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.629991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.629999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.630297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.630305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.630653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.630661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.631407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.631424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 
00:35:43.621 [2024-11-05 16:59:50.631733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.631743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.632080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.632088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.632395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.632403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.632727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.632739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.633077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.633085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 
00:35:43.621 [2024-11-05 16:59:50.633400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.633409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.633718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.633726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.634030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.634040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.634423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.634431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.635132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.635148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 
00:35:43.621 [2024-11-05 16:59:50.635499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.635508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.636019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.636049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.636368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.636377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.636704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.636711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.637058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.637067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 
00:35:43.621 [2024-11-05 16:59:50.637448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.637457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.637755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.637764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.638115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.638123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.638438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.638446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.638965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.638995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 
00:35:43.621 [2024-11-05 16:59:50.639309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.639320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.639591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.639600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.639912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.639920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.640239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.640248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.640556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.640565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 
00:35:43.621 [2024-11-05 16:59:50.640869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.640878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.641211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.641220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.641547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.641556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.642115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.642133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.642429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.642439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 
00:35:43.621 [2024-11-05 16:59:50.642732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.642743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.643069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.643077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.621 [2024-11-05 16:59:50.643443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.621 [2024-11-05 16:59:50.643451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.621 qpair failed and we were unable to recover it. 00:35:43.899 [2024-11-05 16:59:50.643741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.899 [2024-11-05 16:59:50.643753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.899 qpair failed and we were unable to recover it. 00:35:43.899 [2024-11-05 16:59:50.644039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.899 [2024-11-05 16:59:50.644047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.899 qpair failed and we were unable to recover it. 
00:35:43.899 [2024-11-05 16:59:50.644352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.899 [2024-11-05 16:59:50.644360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.899 qpair failed and we were unable to recover it. 
[... same connect()-failed (errno = 111) / qpair-recovery error pair for tqpair=0x7f0cc4000b90 (addr=10.0.0.2, port=4420) repeated with successive timestamps through 2024-11-05 16:59:50.677184 ...]
00:35:43.902 [2024-11-05 16:59:50.677522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.677529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.677819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.677826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.678127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.678134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.678343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.678350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.678666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.678673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 
00:35:43.902 [2024-11-05 16:59:50.678975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.678983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.679296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.679303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.679640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.679648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.679962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.679969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.680161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.680168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 
00:35:43.902 [2024-11-05 16:59:50.680514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.680522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.680818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.680826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.681006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.681013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.681222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.681231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.681512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.681519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 
00:35:43.902 [2024-11-05 16:59:50.681815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.681822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.682116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.682123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.682454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.682460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.682760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.682767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.683148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.683155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 
00:35:43.902 [2024-11-05 16:59:50.683438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.683445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.683799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.683807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.684070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.684076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.684390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.684397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.684692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.684707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 
00:35:43.902 [2024-11-05 16:59:50.684991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.684998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.685310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.685317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.685527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.902 [2024-11-05 16:59:50.685534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.902 qpair failed and we were unable to recover it. 00:35:43.902 [2024-11-05 16:59:50.685827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.685834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.686196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.686203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 
00:35:43.903 [2024-11-05 16:59:50.686449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.686455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.686671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.686678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.686931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.686938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.687269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.687277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.687606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.687613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 
00:35:43.903 [2024-11-05 16:59:50.687979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.687986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.688312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.688319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.688629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.688635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.688845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.688852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.689142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.689149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 
00:35:43.903 [2024-11-05 16:59:50.689484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.689492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.689855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.689862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.690054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.690061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.690344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.690351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.690682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.690690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 
00:35:43.903 [2024-11-05 16:59:50.690896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.690902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.691177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.691184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.691473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.691480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.691792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.691799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.692124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.692131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 
00:35:43.903 [2024-11-05 16:59:50.692321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.692328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.692524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.692531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.692779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.692786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.693084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.693092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.693399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.693406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 
00:35:43.903 [2024-11-05 16:59:50.693733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.693741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.694105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.694112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.694429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.694435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.694711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.694718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.695016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.695031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 
00:35:43.903 [2024-11-05 16:59:50.695214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.903 [2024-11-05 16:59:50.695222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.903 qpair failed and we were unable to recover it. 00:35:43.903 [2024-11-05 16:59:50.695508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.695515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.695849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.695857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.696150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.696158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.696454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.696461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 
00:35:43.904 [2024-11-05 16:59:50.696730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.696737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.697034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.697041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.697337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.697347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.697657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.697664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.697929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.697937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 
00:35:43.904 [2024-11-05 16:59:50.698273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.698280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.698566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.698573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.698883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.698890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.699195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.699210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 00:35:43.904 [2024-11-05 16:59:50.699509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.904 [2024-11-05 16:59:50.699515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.904 qpair failed and we were unable to recover it. 
00:35:43.904 [2024-11-05 16:59:50.699674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.699681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.700058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.700065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.700379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.700386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.700708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.700716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.701022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.701030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.701337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.701345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.701659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.701666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.701951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.701958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.702161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.702168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.702431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.702439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.702720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.702727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.703062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.703069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.703365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.703372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.703704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.703712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.704020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.704028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.704334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.704342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.704659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.704667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.705046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.705054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.705339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.705348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.705634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.705641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.705944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.705951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.706145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.706153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.706454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.706462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.706790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.706797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.904 [2024-11-05 16:59:50.706989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.904 [2024-11-05 16:59:50.706995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.904 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.707273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.707280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.707444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.707451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.707754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.707761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.708057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.708064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.708275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.708282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.708451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.708457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.708748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.708755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.709048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.709055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.709350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.709358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.709674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.709681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.710063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.710071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.710369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.710375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.710655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.710662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.710873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.710880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.711186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.711193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.711553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.711560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.711872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.711879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.712253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.712259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.712558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.712566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.712896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.712903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.713205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.713213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.713406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.713413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.713819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.713826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.714130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.714136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.714453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.714460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.714781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.714788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.714959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.714966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.715291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.715298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.715475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.715481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.715838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.715846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.716148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.716155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.716488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.716494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.716868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.716874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.717190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.717198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.717531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.717538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.717921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.717928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.718224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.718231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.905 [2024-11-05 16:59:50.718518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.905 [2024-11-05 16:59:50.718524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.905 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.718831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.718837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.719170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.719177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.719487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.719494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.719804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.719814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.720109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.720116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.720327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.720334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.720713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.720720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.721075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.721082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.721370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.721377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.721722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.721729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.722021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.722028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.722345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.722352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.722664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.722670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.722949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.722956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.723246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.723260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.723567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.723574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.723876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.723883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.724188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.724195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.724508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.724515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.724818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.724825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.725199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.725206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.725553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.725560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.725875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.725882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.726175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.726182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.726499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.726506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.726815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.726822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.727032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.727040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.727282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.727289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.727630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.727636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.727943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.727950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.728135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.728142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.728375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.728383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.728700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.728707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.729051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.729058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.729251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.729258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.729609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.729617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.729933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.729940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.730328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.730335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.906 qpair failed and we were unable to recover it.
00:35:43.906 [2024-11-05 16:59:50.730671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.906 [2024-11-05 16:59:50.730679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.731002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.731009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.731313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.731320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.731620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.731627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.731923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.731930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.732259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.732266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.732558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.732566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.732902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.732909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.733228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.733234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.733547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.733554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.733805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.733812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.734149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.734155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.734472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.734479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.734788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.907 [2024-11-05 16:59:50.734795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.907 qpair failed and we were unable to recover it.
00:35:43.907 [2024-11-05 16:59:50.735103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.735118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.735405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.735413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.735724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.735732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.735996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.736003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.736363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.736370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 
00:35:43.907 [2024-11-05 16:59:50.736674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.736681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.736978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.736985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.737291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.737299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.737598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.737605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.737905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.737912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 
00:35:43.907 [2024-11-05 16:59:50.738220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.738226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.738608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.738616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.738878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.738885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.739190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.739198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.739510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.739517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 
00:35:43.907 [2024-11-05 16:59:50.739706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.739713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.740046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.740053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.740353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.740360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.740685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.740693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.740989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.740996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 
00:35:43.907 [2024-11-05 16:59:50.741377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.741385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.741699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.741707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.741998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.742006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.742309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.907 [2024-11-05 16:59:50.742318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.907 qpair failed and we were unable to recover it. 00:35:43.907 [2024-11-05 16:59:50.742623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.742630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 
00:35:43.908 [2024-11-05 16:59:50.742834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.742841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.743122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.743128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.743439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.743446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.743758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.743766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.744078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.744084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 
00:35:43.908 [2024-11-05 16:59:50.744394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.744401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.744699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.744706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.745016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.745023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.745395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.745402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.745639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.745646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 
00:35:43.908 [2024-11-05 16:59:50.745936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.745943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.746249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.746257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.746546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.746554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.746867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.746875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.747232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.747240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 
00:35:43.908 [2024-11-05 16:59:50.747562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.747570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.747892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.747900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.748236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.748243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.748548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.748555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.748845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.748853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 
00:35:43.908 [2024-11-05 16:59:50.749174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.749180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.749509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.749516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.749808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.749816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.750036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.750043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.750226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.750233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 
00:35:43.908 [2024-11-05 16:59:50.750517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.750524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.750836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.750844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.751005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.751013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.751314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.751321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 00:35:43.908 [2024-11-05 16:59:50.751530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.751538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.908 qpair failed and we were unable to recover it. 
00:35:43.908 [2024-11-05 16:59:50.751807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.908 [2024-11-05 16:59:50.751814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.752033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.752041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.752244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.752251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.752563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.752570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.752850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.752857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 
00:35:43.909 [2024-11-05 16:59:50.753139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.753145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.753339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.753346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.753536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.753543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.753852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.753861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.754044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.754052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 
00:35:43.909 [2024-11-05 16:59:50.754374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.754381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.754597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.754604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.754891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.754900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.755198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.755206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.755504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.755511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 
00:35:43.909 [2024-11-05 16:59:50.755799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.755806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.755984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.755991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.756249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.756256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.756458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.756465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.756617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.756624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 
00:35:43.909 [2024-11-05 16:59:50.756983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.756990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.757165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.757172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.757528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.757534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.757971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.757978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 00:35:43.909 [2024-11-05 16:59:50.758286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.909 [2024-11-05 16:59:50.758293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.909 qpair failed and we were unable to recover it. 
00:35:43.912 [2024-11-05 16:59:50.791801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.912 [2024-11-05 16:59:50.791807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.912 qpair failed and we were unable to recover it. 00:35:43.912 [2024-11-05 16:59:50.792132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.912 [2024-11-05 16:59:50.792139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.912 qpair failed and we were unable to recover it. 00:35:43.912 [2024-11-05 16:59:50.792349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.912 [2024-11-05 16:59:50.792356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.912 qpair failed and we were unable to recover it. 00:35:43.912 [2024-11-05 16:59:50.792646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.912 [2024-11-05 16:59:50.792652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.912 qpair failed and we were unable to recover it. 00:35:43.912 [2024-11-05 16:59:50.792986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.912 [2024-11-05 16:59:50.792993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.912 qpair failed and we were unable to recover it. 
00:35:43.912 [2024-11-05 16:59:50.793316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.912 [2024-11-05 16:59:50.793323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.912 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.793632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.793638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.793964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.793970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.794261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.794268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.794583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.794590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 
00:35:43.913 [2024-11-05 16:59:50.794888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.794896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.795221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.795228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.795516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.795524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.795833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.795840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.796153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.796160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 
00:35:43.913 [2024-11-05 16:59:50.796471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.796479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.796771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.796779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.797098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.797106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.797411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.797419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.797726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.797733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 
00:35:43.913 [2024-11-05 16:59:50.798066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.798074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.798391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.798398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.798713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.798719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.799077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.799084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.799461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.799468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 
00:35:43.913 [2024-11-05 16:59:50.799773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.799780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.800050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.800057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.800377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.800383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.800684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.800691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.801026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.801033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 
00:35:43.913 [2024-11-05 16:59:50.801323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.801331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.801637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.801645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.801917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.801925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.802224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.802231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.802515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.802521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 
00:35:43.913 [2024-11-05 16:59:50.802812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.802819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.802990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.802997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.803277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.803284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.913 [2024-11-05 16:59:50.803491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.913 [2024-11-05 16:59:50.803499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.913 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.803793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.803801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 
00:35:43.914 [2024-11-05 16:59:50.804060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.804067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.804382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.804389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.804727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.804734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.805116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.805123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.805409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.805417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 
00:35:43.914 [2024-11-05 16:59:50.805607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.805613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.805939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.805946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.806251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.806259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.806552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.806559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.806883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.806890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 
00:35:43.914 [2024-11-05 16:59:50.807189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.807196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.807507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.807514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.807799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.807806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.808092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.808100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.808404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.808411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 
00:35:43.914 [2024-11-05 16:59:50.808713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.808720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.808911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.808919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.809219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.809226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.809532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.809539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.809851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.809858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 
00:35:43.914 [2024-11-05 16:59:50.810191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.810200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.810504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.810511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.810814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.810821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.811019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.811026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.811357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.811364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 
00:35:43.914 [2024-11-05 16:59:50.811671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.811677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.811980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.811987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.812313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.812320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.812616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.812623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.812918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.812925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 
00:35:43.914 [2024-11-05 16:59:50.813217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.813224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.813533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.813540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.813912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.813919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.814199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.814206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 00:35:43.914 [2024-11-05 16:59:50.814510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.914 [2024-11-05 16:59:50.814517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.914 qpair failed and we were unable to recover it. 
00:35:43.914 [2024-11-05 16:59:50.814838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.915 [2024-11-05 16:59:50.814853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.915 qpair failed and we were unable to recover it.
00:35:43.918 [2024-11-05 16:59:50.850854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.850862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.851169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.851177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.851486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.851493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.851786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.851794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.852085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.852093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 
00:35:43.918 [2024-11-05 16:59:50.852396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.852405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.852704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.852712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.852929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.852937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.853146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.853155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.853415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.853423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 
00:35:43.918 [2024-11-05 16:59:50.853601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.853609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.853882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.853890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.854256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.854265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.854569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.854578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.854892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.854901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 
00:35:43.918 [2024-11-05 16:59:50.855081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.855089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.855402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.855410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.855718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.855727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.855926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.855934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.856244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.856252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 
00:35:43.918 [2024-11-05 16:59:50.856564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.856571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.856934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.856943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.857246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.857254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.857556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.857564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.857883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.857892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 
00:35:43.918 [2024-11-05 16:59:50.858236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.858244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.858552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.858561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.858715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.858724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.859001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.859009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.859374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.859382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 
00:35:43.918 [2024-11-05 16:59:50.859690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.859701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.918 [2024-11-05 16:59:50.859896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-11-05 16:59:50.859907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.918 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.860177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.860185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.860457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.860465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.860776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.860785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 
00:35:43.919 [2024-11-05 16:59:50.861200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.861207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.861508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.861517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.861825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.861833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.862129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.862138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.862455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.862463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 
00:35:43.919 [2024-11-05 16:59:50.862758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.862767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.863061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.863070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.863332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.863340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.863648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.863656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.863931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.863939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 
00:35:43.919 [2024-11-05 16:59:50.864262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.864270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.864624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.864632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.864949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.864957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.865275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.865283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.865614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.865623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 
00:35:43.919 [2024-11-05 16:59:50.865922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.865931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.866251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.866260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.866612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.866620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.866920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.866928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.867242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.867250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 
00:35:43.919 [2024-11-05 16:59:50.867576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.867584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.867906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.867915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.868271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.868280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.868576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.868585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.868893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.868902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 
00:35:43.919 [2024-11-05 16:59:50.869207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.869215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.869523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.869531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.869809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.869818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.870139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.870147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.870451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.870460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 
00:35:43.919 [2024-11-05 16:59:50.870766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.870775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.871074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.871083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.871401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.871409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.871723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.871733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 00:35:43.919 [2024-11-05 16:59:50.872043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-11-05 16:59:50.872051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.919 qpair failed and we were unable to recover it. 
00:35:43.919 [2024-11-05 16:59:50.872360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.920 [2024-11-05 16:59:50.872370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.920 qpair failed and we were unable to recover it. 00:35:43.920 [2024-11-05 16:59:50.872664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.920 [2024-11-05 16:59:50.872672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.920 qpair failed and we were unable to recover it. 00:35:43.920 [2024-11-05 16:59:50.872946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.920 [2024-11-05 16:59:50.872954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.920 qpair failed and we were unable to recover it. 00:35:43.920 [2024-11-05 16:59:50.873263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.920 [2024-11-05 16:59:50.873271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.920 qpair failed and we were unable to recover it. 00:35:43.920 [2024-11-05 16:59:50.873577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.920 [2024-11-05 16:59:50.873585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.920 qpair failed and we were unable to recover it. 
00:35:43.920 [2024-11-05 16:59:50.873986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:43.920 [2024-11-05 16:59:50.873994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:43.920 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet repeated for every reconnect attempt from 16:59:50.874300 through 16:59:50.907508; all attempts fail with errno = 111 against 10.0.0.2:4420 on tqpair=0x7f0cc4000b90 ...]
00:35:43.923 [2024-11-05 16:59:50.907725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.907735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.908031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.908039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.908335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.908344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.908458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.908465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.908647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.908656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 
00:35:43.923 [2024-11-05 16:59:50.908837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.908846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.909126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.909134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.909467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.909476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.909661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.909669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.909859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.909868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 
00:35:43.923 [2024-11-05 16:59:50.910142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.910151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.910385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.910394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.910578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.910586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.910866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.910874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.911068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.911076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 
00:35:43.923 [2024-11-05 16:59:50.911376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.911384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.911722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.911730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.912052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.912061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.912236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.912244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.912542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.912551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 
00:35:43.923 [2024-11-05 16:59:50.912789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.912798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.912988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.912996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.913259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.913267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.913573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.913582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.913890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.913899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 
00:35:43.923 [2024-11-05 16:59:50.914200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.914208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.914538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.914546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.914856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.914865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.915149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.915157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 00:35:43.923 [2024-11-05 16:59:50.915317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.923 [2024-11-05 16:59:50.915327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.923 qpair failed and we were unable to recover it. 
00:35:43.923 [2024-11-05 16:59:50.915626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.915634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.915924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.915932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.916248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.916256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.916564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.916573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.916865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.916873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 
00:35:43.924 [2024-11-05 16:59:50.917218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.917227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.917410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.917418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.917721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.917729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.918013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.918022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.918383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.918392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 
00:35:43.924 [2024-11-05 16:59:50.918682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.918691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.918994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.919003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.919362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.919369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.919575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.919583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.919895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.919904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 
00:35:43.924 [2024-11-05 16:59:50.920212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.920220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.920533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.920541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.920875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.920883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.921056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.921064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.921323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.921330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 
00:35:43.924 [2024-11-05 16:59:50.921553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.921561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.921854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.921863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.922129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.922137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.922448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.922457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.922753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.922762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 
00:35:43.924 [2024-11-05 16:59:50.923072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.923080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.923449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.923457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.923754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.923762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.924028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.924036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.924342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.924352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 
00:35:43.924 [2024-11-05 16:59:50.924660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.924668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.924819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.924829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.925004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.925012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.925364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.925373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.925700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.925709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 
00:35:43.924 [2024-11-05 16:59:50.926040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.926049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.926427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.926436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.926765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.926776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.924 [2024-11-05 16:59:50.927075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.924 [2024-11-05 16:59:50.927083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.924 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.927355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.927363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 
00:35:43.925 [2024-11-05 16:59:50.927653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.927661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.927957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.927965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.928255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.928263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.928570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.928579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.928888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.928897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 
00:35:43.925 [2024-11-05 16:59:50.929208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.929216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.929520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.929531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.929859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.929867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.930259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.930267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.930456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.930463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 
00:35:43.925 [2024-11-05 16:59:50.930744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.930756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.931107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.931113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.931407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.931414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.931722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.931728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.931938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.931945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 
00:35:43.925 [2024-11-05 16:59:50.932248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.932255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.932575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.932581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.932890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.932898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.933131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.933138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.933459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.933466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 
00:35:43.925 [2024-11-05 16:59:50.933761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.933768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.934039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.934046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.934360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.934366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.934682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.934689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.935009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.935018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 
00:35:43.925 [2024-11-05 16:59:50.935286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.935294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.935469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.935477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.935683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.935691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.935989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.935997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.936197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.936206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 
00:35:43.925 [2024-11-05 16:59:50.936306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.936314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.936576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.936584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.936919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.936927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.937227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.937236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.937512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.937521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 
00:35:43.925 [2024-11-05 16:59:50.937847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.937857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.938177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.925 [2024-11-05 16:59:50.938186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.925 qpair failed and we were unable to recover it. 00:35:43.925 [2024-11-05 16:59:50.938519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.938529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.938857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.938866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.939043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.939052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 
00:35:43.926 [2024-11-05 16:59:50.939227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.939237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.939539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.939548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.939715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.939724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.939994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.940003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.940314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.940323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 
00:35:43.926 [2024-11-05 16:59:50.940667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.940675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.941006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.941015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.941322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.941331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.941670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.941679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.941974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.941984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 
00:35:43.926 [2024-11-05 16:59:50.942156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.942165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.942359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.942368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.942679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.942687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.942997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.943007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.943287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.943295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 
00:35:43.926 [2024-11-05 16:59:50.943477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.943486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.943696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.943705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.943759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.943767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.944008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.944018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.944185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.944193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 
00:35:43.926 [2024-11-05 16:59:50.944508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.944517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.944774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.944783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.945085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.945094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.945418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.945427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.945738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.945750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 
00:35:43.926 [2024-11-05 16:59:50.946027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.946036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.946334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.946343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.946651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.946660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.946825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.946835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:43.926 [2024-11-05 16:59:50.947041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.947050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 
00:35:43.926 [2024-11-05 16:59:50.947362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.926 [2024-11-05 16:59:50.947370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:43.926 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.947685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.947695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.948009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.948019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.948350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.948359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.948666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.948675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 
00:35:44.201 [2024-11-05 16:59:50.948989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.948998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.949294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.949303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.949607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.949618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.949983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.949992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.950321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.950331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 
00:35:44.201 [2024-11-05 16:59:50.950500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.950508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.950780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.950789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.951125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.951134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.951422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.951431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.951736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.951751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 
00:35:44.201 [2024-11-05 16:59:50.952062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.952069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.952390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.952399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.952739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.952751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.953060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.953069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.953374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.953383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 
00:35:44.201 [2024-11-05 16:59:50.953769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.953778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.954079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.954088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.954401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.954409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.954705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.954714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.955023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.955031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 
00:35:44.201 [2024-11-05 16:59:50.955301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.955309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.955575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.955583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.955941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.955949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.956150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.956158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.201 [2024-11-05 16:59:50.956340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.956348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 
00:35:44.201 [2024-11-05 16:59:50.956636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.201 [2024-11-05 16:59:50.956643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.201 qpair failed and we were unable to recover it. 00:35:44.202 [2024-11-05 16:59:50.957014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.957021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 00:35:44.202 [2024-11-05 16:59:50.957304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.957312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 00:35:44.202 [2024-11-05 16:59:50.957646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.957655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 00:35:44.202 [2024-11-05 16:59:50.957982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.957990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 
00:35:44.202 [2024-11-05 16:59:50.958324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.958333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 00:35:44.202 [2024-11-05 16:59:50.958554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.958562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 00:35:44.202 [2024-11-05 16:59:50.958824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.958832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 00:35:44.202 [2024-11-05 16:59:50.959110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.959118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 00:35:44.202 [2024-11-05 16:59:50.959302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.202 [2024-11-05 16:59:50.959310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.202 qpair failed and we were unable to recover it. 
00:35:44.202 [2024-11-05 16:59:50.959627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.959636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.960057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.960065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.960388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.960397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.960684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.960693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.960978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.960987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.961321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.961329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.961634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.961643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.961930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.961941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.962112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.962122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.962414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.962423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.962727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.962736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.963022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.963032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.963317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.963326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.963503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.963512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.963822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.963831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.964087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.964096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.964286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.964294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.964569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.964577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.964890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.964898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.965207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.965215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.965578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.965585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.965873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.965882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.966180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.966189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.966476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.966485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.966799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.966807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.967145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.967153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.967472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.967481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.967790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.967799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.968120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.968128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.968464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.968472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.968756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.968764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.969064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.969072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.969364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.969372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.969642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.969649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.969962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.969971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.970264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.970273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.970604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.970612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.970798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.970807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.971114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.971122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.971453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.971462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.971764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.971773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.972075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.972083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.972351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.972359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.972648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.972656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.973440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.202 [2024-11-05 16:59:50.973457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.202 qpair failed and we were unable to recover it.
00:35:44.202 [2024-11-05 16:59:50.973798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.973808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.974020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.974028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.974325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.974335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.974619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.974629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.974962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.974971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.975282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.975290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.975597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.975606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.975832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.975841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.976155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.976163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.976446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.976453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.976785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.976794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.977101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.977109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.977415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.977423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.977729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.977738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.978127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.978135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.978431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.978440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.978791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.978799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.979064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.979071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.979370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.979378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.979696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.979704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.979982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.979990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.980263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.980271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.980536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.980545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.980731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.980739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.980955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.980963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.981132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.981140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.981462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.981471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.981798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.981806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.982083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.982091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.982407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.982415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.982729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.982738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.983071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.983081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.983397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.983405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.983727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.983736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.984066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.984075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.984262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.984270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.984600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.984608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.984911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.984919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.985240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.985249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.985445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.985453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.985762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.985770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.986084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.986092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.986354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.986365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.986742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.986753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.987047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.987055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.987367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.987375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.987685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.987694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.988016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.988026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.988331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.988339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.988657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.203 [2024-11-05 16:59:50.988665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.203 qpair failed and we were unable to recover it.
00:35:44.203 [2024-11-05 16:59:50.988996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.203 [2024-11-05 16:59:50.989005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.203 qpair failed and we were unable to recover it. 00:35:44.203 [2024-11-05 16:59:50.989380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.203 [2024-11-05 16:59:50.989388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.203 qpair failed and we were unable to recover it. 00:35:44.203 [2024-11-05 16:59:50.989694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.203 [2024-11-05 16:59:50.989702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.203 qpair failed and we were unable to recover it. 00:35:44.203 [2024-11-05 16:59:50.990010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.203 [2024-11-05 16:59:50.990019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.203 qpair failed and we were unable to recover it. 00:35:44.203 [2024-11-05 16:59:50.990317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.203 [2024-11-05 16:59:50.990327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.203 qpair failed and we were unable to recover it. 
00:35:44.203 [2024-11-05 16:59:50.990622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.203 [2024-11-05 16:59:50.990631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.203 qpair failed and we were unable to recover it. 00:35:44.203 [2024-11-05 16:59:50.990865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.990874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.991193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.991202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.991430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.991439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.991733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.991741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:50.992066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.992075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.992380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.992389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.992696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.992703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.993035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.993045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.993370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.993379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:50.993696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.993704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.993923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.993932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.994329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.994336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.994658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.994666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.994960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.994970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:50.995264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.995271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.995442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.995451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.995655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.995663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.995949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.995958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.996290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.996298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:50.996585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.996593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.996921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.996930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.997251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.997259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.997556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.997563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.997905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.997913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:50.998195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.998204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.998511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.998519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.998832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.998841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.999171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.999181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.999386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.999394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:50.999566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.999574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:50.999919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:50.999929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.000275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.000283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.000555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.000564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.000887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.000896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:51.001204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.001213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.001497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.001505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.001810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.001818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.002131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.002138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.002437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.002446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:51.002736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.002745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.003052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.003061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.003402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.003410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.003617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.003624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.003910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.003919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.204 [2024-11-05 16:59:51.004196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.004205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.004516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.004525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.004818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.004827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.005177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.005185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 00:35:44.204 [2024-11-05 16:59:51.005481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.204 [2024-11-05 16:59:51.005488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.204 qpair failed and we were unable to recover it. 
00:35:44.205 [2024-11-05 16:59:51.005683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.005691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.005975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.005983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.006260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.006267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.006567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.006575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.006886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.006902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 
00:35:44.205 [2024-11-05 16:59:51.007203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.007211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.007521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.007530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.007903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.007912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.008213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.008221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.008605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.008614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 
00:35:44.205 [2024-11-05 16:59:51.008780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.008790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.009094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.009102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.009284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.009292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.009558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.009567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.009859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.009868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 
00:35:44.205 [2024-11-05 16:59:51.010175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.010184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.010557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.010566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.010858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.010867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.011206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.011214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.011521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.011529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 
00:35:44.205 [2024-11-05 16:59:51.011839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.011848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.012133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.012140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.012468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.012476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.012788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.012796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.013054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.013062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 
00:35:44.205 [2024-11-05 16:59:51.013351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.013359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.013688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.013696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.014039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.014048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.014344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.014352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 00:35:44.205 [2024-11-05 16:59:51.014642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.205 [2024-11-05 16:59:51.014650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.205 qpair failed and we were unable to recover it. 
00:35:44.205 [2024-11-05 16:59:51.014963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.205 [2024-11-05 16:59:51.014971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.205 qpair failed and we were unable to recover it.
00:35:44.207 [... the same pair of messages (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats verbatim through 2024-11-05 16:59:51.050223 ...]
00:35:44.207 [2024-11-05 16:59:51.050501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.050509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.050787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.050795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.051120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.051128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.051436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.051444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.051760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.051770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 
00:35:44.207 [2024-11-05 16:59:51.052105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.052113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.052420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.052428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.052736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.052744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.053029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.053037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.053327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.053335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 
00:35:44.207 [2024-11-05 16:59:51.053642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.053659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.053956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.207 [2024-11-05 16:59:51.053964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.207 qpair failed and we were unable to recover it. 00:35:44.207 [2024-11-05 16:59:51.054240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.054249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.054552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.054561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.054885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.054894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.055206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.055214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.055525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.055533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.055822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.055830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.056152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.056160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.056463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.056472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.057314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.057331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.057648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.057657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.057986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.057994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.058291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.058299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.058607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.058617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.058794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.058803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.059091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.059100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.059402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.059412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.059596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.059606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.059871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.059879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.060181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.060189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.060496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.060505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.060815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.060827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.061495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.061514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.061809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.061818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.062182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.062191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.062496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.062504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.062819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.062827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.063158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.063166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.063474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.063482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.063795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.063804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.064079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.064087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.064378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.064387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.064714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.064723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.065004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.065013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.065217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.065225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.065494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.065502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.065805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.065813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.065993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.066002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.066211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.066220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.066519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.066527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.066730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.066738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.067052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.067061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.067355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.067364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.067636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.067644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.067942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.067951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.068265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.068273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.068582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.068591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.068902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.068911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.069219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.069228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.069511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.069519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.069864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.069873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.070125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.070134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.070441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.070450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.070762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.070771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 
00:35:44.208 [2024-11-05 16:59:51.071054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.071063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.071375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.071383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.208 [2024-11-05 16:59:51.071674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.208 [2024-11-05 16:59:51.071683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.208 qpair failed and we were unable to recover it. 00:35:44.209 [2024-11-05 16:59:51.071900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.209 [2024-11-05 16:59:51.071910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.209 qpair failed and we were unable to recover it. 00:35:44.209 [2024-11-05 16:59:51.072237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.209 [2024-11-05 16:59:51.072245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.209 qpair failed and we were unable to recover it. 
00:35:44.209 [2024-11-05 16:59:51.072559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.209 [2024-11-05 16:59:51.072568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.209 qpair failed and we were unable to recover it. 00:35:44.209 [2024-11-05 16:59:51.072669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.209 [2024-11-05 16:59:51.072677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.209 qpair failed and we were unable to recover it. 00:35:44.209 [2024-11-05 16:59:51.072968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.209 [2024-11-05 16:59:51.072976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.209 qpair failed and we were unable to recover it. 00:35:44.209 [2024-11-05 16:59:51.073267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.209 [2024-11-05 16:59:51.073275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.209 qpair failed and we were unable to recover it. 00:35:44.209 [2024-11-05 16:59:51.073576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.209 [2024-11-05 16:59:51.073584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.209 qpair failed and we were unable to recover it. 
00:35:44.209 [2024-11-05 16:59:51.073767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.209 [2024-11-05 16:59:51.073777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.209 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" pair repeated for tqpair=0x7f0cc4000b90, addr=10.0.0.2, port=4420 from 16:59:51.073767 through 16:59:51.108175; duplicates elided ...]
00:35:44.211 [2024-11-05 16:59:51.108325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.108333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.108643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.108653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.108933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.108941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.109237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.109245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.109433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.109441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 
00:35:44.211 [2024-11-05 16:59:51.109740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.109751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.110043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.110051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.110337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.110345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.110687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.110695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.110989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.110998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 
00:35:44.211 [2024-11-05 16:59:51.111349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.111358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.111653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.111662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.111934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.111943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.112298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.112306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.112584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.112592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 
00:35:44.211 [2024-11-05 16:59:51.112859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.112867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.113200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.113209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.113515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.113524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.113813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.113821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.114178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.114186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 
00:35:44.211 [2024-11-05 16:59:51.114471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.114478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.114782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.114790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.115069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.115078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.115386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.115395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.115669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.115677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 
00:35:44.211 [2024-11-05 16:59:51.115969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.115978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.116301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.116308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.116605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.116615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.116921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.116929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.117220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.117233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 
00:35:44.211 [2024-11-05 16:59:51.117558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.117567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.117829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.117837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.118065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.118073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.118369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.118377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 00:35:44.211 [2024-11-05 16:59:51.118664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.211 [2024-11-05 16:59:51.118672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.211 qpair failed and we were unable to recover it. 
00:35:44.211 [2024-11-05 16:59:51.118939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.118948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.119268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.119277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.119582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.119590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.119971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.119979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.120329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.120337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.212 [2024-11-05 16:59:51.120589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.120597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.120908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.120917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.121240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.121249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.121630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.121638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.121923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.121931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.212 [2024-11-05 16:59:51.122230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.122238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.122576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.122583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.122888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.122896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.123188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.123197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.123513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.123522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.212 [2024-11-05 16:59:51.123826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.123835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.124000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.124008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.124392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.124400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.124696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.124704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.124994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.125003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.212 [2024-11-05 16:59:51.125311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.125320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.125647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.125656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.125981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.125990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.126294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.126302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.126599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.126607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.212 [2024-11-05 16:59:51.126897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.126906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.127280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.127288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.127578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.127585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.127890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.127898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.128218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.128227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.212 [2024-11-05 16:59:51.128530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.128538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.128822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.128830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.129148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.129156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.129433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.129441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.129765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.129775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.212 [2024-11-05 16:59:51.130056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.130064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.130372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.130381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.130689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.130698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.131052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.131060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.131363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.131372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.212 [2024-11-05 16:59:51.131674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.131682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.131900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.131908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.132209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.132219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.132539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.132548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 00:35:44.212 [2024-11-05 16:59:51.132755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.212 [2024-11-05 16:59:51.132765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.212 qpair failed and we were unable to recover it. 
00:35:44.214 [2024-11-05 16:59:51.165994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.214 [2024-11-05 16:59:51.166002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.214 qpair failed and we were unable to recover it. 00:35:44.214 [2024-11-05 16:59:51.166288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.214 [2024-11-05 16:59:51.166296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.214 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.166585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.166593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.166900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.166909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.167185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.167193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.167482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.167490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.167818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.167826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.168134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.168143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.168481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.168490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.168799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.168808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.169146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.169154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.169523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.169532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.169824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.169832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.170114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.170122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.170425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.170433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.170742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.170752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.171043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.171051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.171369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.171377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.171666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.171675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.171983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.171993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.172264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.172273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.172579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.172588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.172926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.172936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.173269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.173278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.173567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.173575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.173959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.173968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.174274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.174283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.174587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.174597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.174949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.174958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.175276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.175285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.175471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.175481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.175808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.175816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.176077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.176085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.176396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.176404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.176694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.176702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.176981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.176990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.177289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.177297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.177614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.177623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.177687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.177694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.177984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.177995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.178296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.178304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.178578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.178586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.178879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.178887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.179178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.179186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.179492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.179501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.179812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.179821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.180157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.180164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.180494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.180502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.180809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.180817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.181103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.181111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.181417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.181424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.181713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.181721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.182050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.182059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.182280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.182288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.215 [2024-11-05 16:59:51.182617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.182625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 
00:35:44.215 [2024-11-05 16:59:51.182808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.215 [2024-11-05 16:59:51.182816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.215 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.183103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.183111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.183299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.183316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.183620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.183629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.183928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.183936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.184125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.184133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.184467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.184476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.184654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.184662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.184853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.184861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.185163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.185172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.185465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.185473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.185775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.185783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.186081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.186089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.186404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.186411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.186759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.186767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.187112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.187121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.187432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.187441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.187748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.187756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.188106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.188115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.188440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.188449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.188751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.188760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.189059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.189067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.189241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.189249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.189544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.189552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.189860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.189869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.190164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.190172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.190501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.190510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.190786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.190794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.190979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.190987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.191315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.191322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.191678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.191687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.191946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.191954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.192172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.192180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.192351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.192360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.192654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.192662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.192846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.192854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.193118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.193126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.193413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.193420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.193637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.193645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.193814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.193822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.194106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.194115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.194364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.194372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.194668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.194676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.194946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.194954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.195274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.195281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.195585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.195593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.195921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.195929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.196233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.196240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.196419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.196429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.196761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.196770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.216 [2024-11-05 16:59:51.197097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.197105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.197444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.197452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.197762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.197771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.198074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.198082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 00:35:44.216 [2024-11-05 16:59:51.198425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.216 [2024-11-05 16:59:51.198433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.216 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.198739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.198751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.199073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.199081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.199411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.199419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.199817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.199825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.200118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.200126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.200448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.200456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.200635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.200643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.200938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.200947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.201132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.201140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.201433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.201443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.201726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.201735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.202047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.202055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.202364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.202372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.202526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.202535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.202826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.202834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.203161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.203169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.203471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.203479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.203812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.203821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.204136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.204144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.204448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.204457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.204716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.204724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.205028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.205036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.205333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.205341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.205683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.205691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.205980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.205988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.206285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.206293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.206490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.206499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.206791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.206800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.207095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.207103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.207397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.207406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.207712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.207721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.208022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.208031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.208367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.208376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.208698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.208706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.208955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.208962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.209294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.209302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.209479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.209487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.209760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.209769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.210026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.210034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.210388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.210397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.210716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.210724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.211041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.211051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.211206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.211214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.211650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.211658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.211952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.211960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 
00:35:44.217 [2024-11-05 16:59:51.212294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.212302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.212611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.212620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.212810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.212818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.212996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.217 [2024-11-05 16:59:51.213003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.217 qpair failed and we were unable to recover it. 00:35:44.217 [2024-11-05 16:59:51.213289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.213300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 
00:35:44.218 [2024-11-05 16:59:51.213580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.213589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.213888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.213898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.214238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.214246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.214538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.214546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.214905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.214914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 
00:35:44.218 [2024-11-05 16:59:51.215220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.215227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.215407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.215415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.215718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.215727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.216076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.216085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.216417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.216426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 
00:35:44.218 [2024-11-05 16:59:51.216733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.216742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.217042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.217051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.217328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.217337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.217667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.217676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.217985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.217995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 
00:35:44.218 [2024-11-05 16:59:51.218333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.218342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.218678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.218687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.219037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.219045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.219384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.219393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 00:35:44.218 [2024-11-05 16:59:51.219585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.218 [2024-11-05 16:59:51.219595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.218 qpair failed and we were unable to recover it. 
00:35:44.218 [2024-11-05 16:59:51.219867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.219875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.220175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.220182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.220505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.220512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.220804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.220812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.221016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.221024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.221313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.221321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.221661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.221669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.221959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.221967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.222272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.222279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.222568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.222576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.222909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.222918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.223092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.223100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.223318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.223325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.223608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.223616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.223926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.223934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.224231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.224239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.224542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.224550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.224862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.224870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.225024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.225032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.225305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.225314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.225629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.225637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.225935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.225944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.226240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.226248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.226556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.226564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.226873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.226881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.227208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.227217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.227538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.227547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.227853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.227861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.228185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.228193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.228551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.228559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.228848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.228856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.229183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.229191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.229498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.218 [2024-11-05 16:59:51.229506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.218 qpair failed and we were unable to recover it.
00:35:44.218 [2024-11-05 16:59:51.229794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.229802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.230094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.230102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.230401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.230409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.230740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.230751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.231043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.231051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.231355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.231363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.231672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.231680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.231987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.231997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.232283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.232292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.232578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.232587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.232889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.232898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.233186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.233195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.233518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.233527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.233858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.233869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.234194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.234203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.234525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.234533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.234857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.234865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.235151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.235159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.235452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.235460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.235758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.235766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.236052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.236061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.236356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.236365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.236634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.236643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.236971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.236981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.237164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.237174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.237502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.237510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.237799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.237807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.237993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.238000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.238298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.238306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.238611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.238619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.238912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.238920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.239233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.239241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.239531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.239539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.239837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.239845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.240156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.240164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.240469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.240477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.240767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.240775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.241136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.241144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.241466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.241474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.241779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.241787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.242120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.242128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.242448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.242456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.242749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.242758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.243031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.243038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.243329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.243337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.243633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.243641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.243942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.243951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.244257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.244265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.244589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.244597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.244811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.244819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.245142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.245150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.245436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.245444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.245770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.219 [2024-11-05 16:59:51.245778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.219 qpair failed and we were unable to recover it.
00:35:44.219 [2024-11-05 16:59:51.246094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.246103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.246460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.246469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.246793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.246802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.247073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.247081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.247376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.247384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.247678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.247686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.247990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.247999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.248288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.248296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.248584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.220 [2024-11-05 16:59:51.248592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.220 qpair failed and we were unable to recover it.
00:35:44.220 [2024-11-05 16:59:51.248895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.248903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 00:35:44.220 [2024-11-05 16:59:51.249177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.249185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 00:35:44.220 [2024-11-05 16:59:51.249473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.249481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 00:35:44.220 [2024-11-05 16:59:51.249783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.249791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 00:35:44.220 [2024-11-05 16:59:51.250083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.250091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 
00:35:44.220 [2024-11-05 16:59:51.250397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.250405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 00:35:44.220 [2024-11-05 16:59:51.250725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.250733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 00:35:44.220 [2024-11-05 16:59:51.251022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.251031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 00:35:44.220 [2024-11-05 16:59:51.251319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.220 [2024-11-05 16:59:51.251327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.220 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.251703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.251713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 
00:35:44.495 [2024-11-05 16:59:51.252017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.252026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.252331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.252340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.252644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.252653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.252935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.252943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.253239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.253247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 
00:35:44.495 [2024-11-05 16:59:51.253549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.253557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.253866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.253875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.254180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.254188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.254477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.254485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.254829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.254838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 
00:35:44.495 [2024-11-05 16:59:51.255059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.255067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.255378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.255385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.255677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.255685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.255993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.256001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.256325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.256333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 
00:35:44.495 [2024-11-05 16:59:51.256637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.256646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.256932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.256942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.257252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.257261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.257537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.257546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.257903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.257911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 
00:35:44.495 [2024-11-05 16:59:51.258203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.258211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.258518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.258527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.258825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.258833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.259139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.259147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.259450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.259458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 
00:35:44.495 [2024-11-05 16:59:51.259761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.259769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.260061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.260068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.260369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.260378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.260749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.260758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.495 qpair failed and we were unable to recover it. 00:35:44.495 [2024-11-05 16:59:51.260975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.495 [2024-11-05 16:59:51.260984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 
00:35:44.496 [2024-11-05 16:59:51.261275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.261284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.261600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.261609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.261761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.261770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.262050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.262058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.262355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.262363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 
00:35:44.496 [2024-11-05 16:59:51.262641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.262649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.262961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.262969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.263277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.263285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.263608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.263615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.263918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.263926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 
00:35:44.496 [2024-11-05 16:59:51.264248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.264256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.264563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.264571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.264867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.264875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.265189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.265197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.265560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.265568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 
00:35:44.496 [2024-11-05 16:59:51.265755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.265763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.266025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.266034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.266331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.266340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.266644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.266652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.266922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.266932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 
00:35:44.496 [2024-11-05 16:59:51.267230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.267239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.267545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.267554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.267879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.267889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.268230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.496 [2024-11-05 16:59:51.268239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.496 qpair failed and we were unable to recover it. 00:35:44.496 [2024-11-05 16:59:51.268543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.268553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 
00:35:44.497 [2024-11-05 16:59:51.268914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.268922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.269082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.269091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.269403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.269411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.269737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.269750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.270044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.270053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 
00:35:44.497 [2024-11-05 16:59:51.270352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.270360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.270707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.270717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.271050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.271058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.271237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.271245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.271537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.271545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 
00:35:44.497 [2024-11-05 16:59:51.271819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.271828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.272052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.272060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.272322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.272329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.272630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.272638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.272925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.272933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 
00:35:44.497 [2024-11-05 16:59:51.273236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.273245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.273550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.273558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.273858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.273866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.274185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.274193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 00:35:44.497 [2024-11-05 16:59:51.274514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.274522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 
00:35:44.497 [2024-11-05 16:59:51.274717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.497 [2024-11-05 16:59:51.274726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.497 qpair failed and we were unable to recover it. 
00:35:44.502 [2024-11-05 16:59:51.309512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.309521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.309830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.309838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.310125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.310133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.310445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.310453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.310780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.310789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 
00:35:44.502 [2024-11-05 16:59:51.311090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.311098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.311432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.311441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.311737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.311750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.312028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.312036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.312338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.312346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 
00:35:44.502 [2024-11-05 16:59:51.312636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.312644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.312897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.312905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.313227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.313235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.313547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.313556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.313845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.313854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 
00:35:44.502 [2024-11-05 16:59:51.314129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.314137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.314431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.314439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.314733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.314742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.315081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.315089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 00:35:44.502 [2024-11-05 16:59:51.315394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.502 [2024-11-05 16:59:51.315403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.502 qpair failed and we were unable to recover it. 
00:35:44.502 [2024-11-05 16:59:51.315789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.315797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.316099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.316107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.316395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.316403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.316710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.316718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.317032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.317042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 
00:35:44.503 [2024-11-05 16:59:51.317352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.317360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.317688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.317696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.317996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.318004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.318301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.318309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.318625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.318634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 
00:35:44.503 [2024-11-05 16:59:51.318994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.319003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.319296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.319305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.319591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.319600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.319940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.319948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.320275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.320284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 
00:35:44.503 [2024-11-05 16:59:51.320579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.320588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.320985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.320995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.321300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.321308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.321637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.321645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.321955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.321963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 
00:35:44.503 [2024-11-05 16:59:51.322260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.322268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.322572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.322581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.322870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.322878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.323192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.323200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.323526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.323534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 
00:35:44.503 [2024-11-05 16:59:51.323713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.323723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.324027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.324036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.324347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.324356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.324679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.324688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.324988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.324997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 
00:35:44.503 [2024-11-05 16:59:51.325316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.325325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.325636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.325645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.325975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.325985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.326279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.326287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.326463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.326473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 
00:35:44.503 [2024-11-05 16:59:51.326787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.326795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.327103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.327111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.327370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.503 [2024-11-05 16:59:51.327378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.503 qpair failed and we were unable to recover it. 00:35:44.503 [2024-11-05 16:59:51.327543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.327555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.327866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.327874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 
00:35:44.504 [2024-11-05 16:59:51.328029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.328038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.328334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.328342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.328549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.328557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.328842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.328851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.329137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.329145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 
00:35:44.504 [2024-11-05 16:59:51.329490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.329498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.329821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.329829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.330189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.330198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.330375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.330383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.330686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.330694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 
00:35:44.504 [2024-11-05 16:59:51.330989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.330997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.331256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.331266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.331545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.331552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.331863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.331871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 00:35:44.504 [2024-11-05 16:59:51.332209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.332217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it. 
00:35:44.504 [2024-11-05 16:59:51.332519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.504 [2024-11-05 16:59:51.332527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.504 qpair failed and we were unable to recover it.
00:35:44.504 [... the same connect() failed (errno = 111) / qpair-recovery error pair for tqpair=0x7f0cc4000b90 (10.0.0.2:4420) repeats continuously through 16:59:51.367; duplicate repetitions omitted ...]
00:35:44.509 [2024-11-05 16:59:51.367955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.367963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.368270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.368279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.368572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.368580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.368887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.368896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.369269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.369277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 
00:35:44.509 [2024-11-05 16:59:51.369546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.369554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.369870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.369878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.370180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.370188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.370478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.370487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.370795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.370804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 
00:35:44.509 [2024-11-05 16:59:51.371133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.371141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.371332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.371340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.371608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.371616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.371770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.371778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.372045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.372052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 
00:35:44.509 [2024-11-05 16:59:51.372388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.372397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.372721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.372729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.373024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.373032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.373319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.373327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.509 [2024-11-05 16:59:51.373632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.373639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 
00:35:44.509 [2024-11-05 16:59:51.373847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.509 [2024-11-05 16:59:51.373856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.509 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.374178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.374187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.374505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.374513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.374821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.374829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.375164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.375172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 
00:35:44.510 [2024-11-05 16:59:51.375363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.375371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.375653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.375661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.375868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.375876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.376208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.376217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.376521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.376529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 
00:35:44.510 [2024-11-05 16:59:51.376812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.376820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.377127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.377135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.377459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.377466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.377775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.377783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.378115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.378123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 
00:35:44.510 [2024-11-05 16:59:51.378474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.378483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.378808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.378816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.379124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.379132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.379467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.379476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.379853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.379862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 
00:35:44.510 [2024-11-05 16:59:51.380154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.380162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.380508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.380518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.380809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.380818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.381129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.381137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.381420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.381427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 
00:35:44.510 [2024-11-05 16:59:51.381738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.381750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.382071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.382079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.382436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.382444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.382738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.382749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.510 [2024-11-05 16:59:51.383031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.383038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 
00:35:44.510 [2024-11-05 16:59:51.383326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.510 [2024-11-05 16:59:51.383334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.510 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.383539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.383546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.383841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.383849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.384155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.384163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.384451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.384459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 
00:35:44.511 [2024-11-05 16:59:51.384764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.384772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.385046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.385054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.385317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.385325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.385606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.385615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.385924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.385932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 
00:35:44.511 [2024-11-05 16:59:51.386254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.386262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.386570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.386577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.386870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.386878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.387185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.387193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.387535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.387544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 
00:35:44.511 [2024-11-05 16:59:51.387841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.387849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.388196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.388205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.388510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.388518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.388807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.388817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.389177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.389185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 
00:35:44.511 [2024-11-05 16:59:51.389515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.389523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.389749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.389757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.390128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.390136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.390442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.390451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 00:35:44.511 [2024-11-05 16:59:51.390738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.511 [2024-11-05 16:59:51.390748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.511 qpair failed and we were unable to recover it. 
00:35:44.511 [2024-11-05 16:59:51.391098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.511 [2024-11-05 16:59:51.391107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.511 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 16:59:51.391400 through 16:59:51.425143, identical except for timestamps ...]
00:35:44.515 [2024-11-05 16:59:51.425456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.515 [2024-11-05 16:59:51.425464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.515 qpair failed and we were unable to recover it.
00:35:44.515 [2024-11-05 16:59:51.425759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.515 [2024-11-05 16:59:51.425767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.515 qpair failed and we were unable to recover it. 00:35:44.515 [2024-11-05 16:59:51.426052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.515 [2024-11-05 16:59:51.426060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.515 qpair failed and we were unable to recover it. 00:35:44.515 [2024-11-05 16:59:51.426372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.515 [2024-11-05 16:59:51.426380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.515 qpair failed and we were unable to recover it. 00:35:44.515 [2024-11-05 16:59:51.426691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.426698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.426997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.427005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 
00:35:44.516 [2024-11-05 16:59:51.427277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.427284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.427573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.427581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.427890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.427899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.428240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.428249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.428540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.428548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 
00:35:44.516 [2024-11-05 16:59:51.428851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.428860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.429153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.429161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.429446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.429454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.429766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.429774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.430107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.430116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 
00:35:44.516 [2024-11-05 16:59:51.430451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.430459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.430791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.430799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.431120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.431128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.431451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.431459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.431612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.431620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 
00:35:44.516 [2024-11-05 16:59:51.431888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.431898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.432195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.432203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.432487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.432494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.432799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.432808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.433125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.433133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 
00:35:44.516 [2024-11-05 16:59:51.433483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.433491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.433785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.433793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.516 [2024-11-05 16:59:51.434089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.516 [2024-11-05 16:59:51.434097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.516 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.434383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.434391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.434696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.434705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 
00:35:44.517 [2024-11-05 16:59:51.435030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.435039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.435289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.435297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.435621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.435630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.435947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.435956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.436283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.436291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 
00:35:44.517 [2024-11-05 16:59:51.436607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.436616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.436917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.436925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.437248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.437256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.437545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.437554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.437866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.437874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 
00:35:44.517 [2024-11-05 16:59:51.438200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.438208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.438543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.438552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.438844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.438852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.439160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.439168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.439462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.439470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 
00:35:44.517 [2024-11-05 16:59:51.439784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.439793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.440106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.440114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.440445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.440454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.440738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.440755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.441042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.441051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 
00:35:44.517 [2024-11-05 16:59:51.441339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.441346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.441522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.441530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.441854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.441863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.442190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.517 [2024-11-05 16:59:51.442198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.517 qpair failed and we were unable to recover it. 00:35:44.517 [2024-11-05 16:59:51.442575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.442584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 
00:35:44.518 [2024-11-05 16:59:51.442898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.442907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.443204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.443212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.443411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.443419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.443715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.443722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.444028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.444037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 
00:35:44.518 [2024-11-05 16:59:51.444396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.444405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.444653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.444662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.444781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.444789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.445001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.445010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.445346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.445355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 
00:35:44.518 [2024-11-05 16:59:51.445535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.445545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.445809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.445817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.446007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.446015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.446333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.446341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.446521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.446530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 
00:35:44.518 [2024-11-05 16:59:51.446690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.446699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.447028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.447037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.447222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.447230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.447526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.447534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 00:35:44.518 [2024-11-05 16:59:51.447815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.518 [2024-11-05 16:59:51.447823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.518 qpair failed and we were unable to recover it. 
00:35:44.518 [2024-11-05 16:59:51.448152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:44.518 [2024-11-05 16:59:51.448161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 
00:35:44.518 qpair failed and we were unable to recover it. 
00:35:44.521 [2024-11-05 16:59:51.480344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.521 [2024-11-05 16:59:51.480352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.521 qpair failed and we were unable to recover it. 00:35:44.521 [2024-11-05 16:59:51.480638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.480646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.480865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.480874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.481044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.481053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.481372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.481380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 
00:35:44.522 [2024-11-05 16:59:51.481706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.481715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.482014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.482023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.482347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.482356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.482689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.482699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.482874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.482882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 
00:35:44.522 [2024-11-05 16:59:51.483208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.483217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.483559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.483568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.483934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.483943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.484254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.484262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.484535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.484543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 
00:35:44.522 [2024-11-05 16:59:51.484757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.484765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.485074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.485082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.485443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.485451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.485637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.485644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.485824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.485831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 
00:35:44.522 [2024-11-05 16:59:51.486118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.486126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.486502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.486510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.486692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.486700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.486894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.486902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.487233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.487242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 
00:35:44.522 [2024-11-05 16:59:51.487568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.487575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.487936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.487945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.488268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.488275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.488323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.488330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.488623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.488631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 
00:35:44.522 [2024-11-05 16:59:51.488814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.488822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.489096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.489104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.489288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.489296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.489614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.489622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.489911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.489919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 
00:35:44.522 [2024-11-05 16:59:51.490230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.490238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.490524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.490532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.490945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.490952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.491207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.522 [2024-11-05 16:59:51.491215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.522 qpair failed and we were unable to recover it. 00:35:44.522 [2024-11-05 16:59:51.491544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.491552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 
00:35:44.523 [2024-11-05 16:59:51.491860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.491868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.492070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.492079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.492424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.492432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.492813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.492822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.493096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.493104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 
00:35:44.523 [2024-11-05 16:59:51.493434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.493443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.493618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.493627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.493936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.493944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.494269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.494279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.494453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.494462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 
00:35:44.523 [2024-11-05 16:59:51.494617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.494626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.494932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.494941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.495299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.495308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.495605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.495614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.495817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.495825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 
00:35:44.523 [2024-11-05 16:59:51.496160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.496168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.496503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.496511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.496802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.496811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.497147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.497155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.497483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.497491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 
00:35:44.523 [2024-11-05 16:59:51.497828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.497836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.498174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.498182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.498509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.498517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.498890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.498899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.499192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.499200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 
00:35:44.523 [2024-11-05 16:59:51.499397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.499404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.499701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.499709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.500009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.500018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.500318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.500326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.500624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.500632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 
00:35:44.523 [2024-11-05 16:59:51.501009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.501018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.501301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.501309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.501649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.501657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.501993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.502002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.502334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.502342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 
00:35:44.523 [2024-11-05 16:59:51.502669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.502677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.502988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.523 [2024-11-05 16:59:51.502997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.523 qpair failed and we were unable to recover it. 00:35:44.523 [2024-11-05 16:59:51.503288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.524 [2024-11-05 16:59:51.503296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.524 qpair failed and we were unable to recover it. 00:35:44.524 [2024-11-05 16:59:51.503620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.524 [2024-11-05 16:59:51.503628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.524 qpair failed and we were unable to recover it. 00:35:44.524 [2024-11-05 16:59:51.503934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.524 [2024-11-05 16:59:51.503944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.524 qpair failed and we were unable to recover it. 
00:35:44.524 - 00:35:44.527 [2024-11-05 16:59:51.504251 - 16:59:51.537411] (the same three-record sequence repeats: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:35:44.527 [2024-11-05 16:59:51.537709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.537717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.538036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.538044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.538217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.538225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.538482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.538491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.538820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.538830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 
00:35:44.527 [2024-11-05 16:59:51.539163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.539175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.539484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.539492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.539773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.539782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.540081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.540089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.540373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.540381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 
00:35:44.527 [2024-11-05 16:59:51.540686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.540695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.541012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.541021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.541326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.541334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.541669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.541678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.541997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.542006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 
00:35:44.527 [2024-11-05 16:59:51.542292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.542301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.542664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.542673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.543016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.543026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.543240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.543250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.543579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.543588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 
00:35:44.527 [2024-11-05 16:59:51.543882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.543890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.544231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.544240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.544406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.544416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.544702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.544710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 00:35:44.527 [2024-11-05 16:59:51.545023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.527 [2024-11-05 16:59:51.545032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.527 qpair failed and we were unable to recover it. 
00:35:44.802 [2024-11-05 16:59:51.545321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.545330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.545635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.545645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.545844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.545853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.546164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.546173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.546501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.546509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 
00:35:44.802 [2024-11-05 16:59:51.546811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.546820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.547076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.547085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.547405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.547414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.547742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.547761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.548030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.548039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 
00:35:44.802 [2024-11-05 16:59:51.548325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.548334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.548634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.548643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.548967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.548976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.802 [2024-11-05 16:59:51.549356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.802 [2024-11-05 16:59:51.549364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.802 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.549523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.549532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 
00:35:44.803 [2024-11-05 16:59:51.549857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.549866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.550035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.550044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.550348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.550357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.550695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.550703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.551013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.551023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 
00:35:44.803 [2024-11-05 16:59:51.551344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.551355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.551671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.551680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.551967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.551976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.552287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.552296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.552619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.552628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 
00:35:44.803 [2024-11-05 16:59:51.552934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.552943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.553268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.553277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.553583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.553592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.553902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.553911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.554218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.554227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 
00:35:44.803 [2024-11-05 16:59:51.554521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.554529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.554808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.554816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.555101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.555109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.555408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.555416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.555703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.555712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 
00:35:44.803 [2024-11-05 16:59:51.556019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.556028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.556390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.556397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.556706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.556714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.557118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.557126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.557498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.557506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 
00:35:44.803 [2024-11-05 16:59:51.557786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.557795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.558136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.558144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.558320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.558329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.558637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.558646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.558928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.558937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 
00:35:44.803 [2024-11-05 16:59:51.559244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.559252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.559616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.559625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.559925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.559935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.560288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.560296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 00:35:44.803 [2024-11-05 16:59:51.560603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.803 [2024-11-05 16:59:51.560612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.803 qpair failed and we were unable to recover it. 
00:35:44.803 [2024-11-05 16:59:51.560900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.803 [2024-11-05 16:59:51.560909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.803 qpair failed and we were unable to recover it.
00:35:44.803 [... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection attempt from 16:59:51.561229 through 16:59:51.595908 ...]
00:35:44.806 [2024-11-05 16:59:51.596218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.806 [2024-11-05 16:59:51.596227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.806 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.596413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.596422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.596728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.596737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.597060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.597068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.597349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.597357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 
00:35:44.807 [2024-11-05 16:59:51.597680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.597687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.597978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.597987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.598274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.598283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.598588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.598596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.598885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.598894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 
00:35:44.807 [2024-11-05 16:59:51.599200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.599208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.599519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.599526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.599710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.599717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.600015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.600024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.600327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.600336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 
00:35:44.807 [2024-11-05 16:59:51.600660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.600671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.600887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.600895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.601228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.601235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.601511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.601520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.601850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.601859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 
00:35:44.807 [2024-11-05 16:59:51.602177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.602187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.602476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.602485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.602788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.602797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.603094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.603102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.603372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.603379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 
00:35:44.807 [2024-11-05 16:59:51.603728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.603737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.604061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.604069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.604367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.604375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.604682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.604690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.605013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.605021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 
00:35:44.807 [2024-11-05 16:59:51.605334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.605341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.605666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.605673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.605985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.605994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.807 [2024-11-05 16:59:51.606281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.807 [2024-11-05 16:59:51.606289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.807 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.606604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.606613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 
00:35:44.808 [2024-11-05 16:59:51.606906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.606914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.607184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.607192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.607479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.607487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.607791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.607799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.608122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.608130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 
00:35:44.808 [2024-11-05 16:59:51.608438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.608447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.608838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.608847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.609055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.609063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.609373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.609381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.609762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.609770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 
00:35:44.808 [2024-11-05 16:59:51.609980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.609988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.610291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.610299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.610488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.610496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.610870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.610878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.611085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.611093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 
00:35:44.808 [2024-11-05 16:59:51.611413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.611421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.611730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.611739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.612015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.612023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.612303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.612311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.612610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.612618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 
00:35:44.808 [2024-11-05 16:59:51.612926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.612936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.613203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.613211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.613526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.613534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.613816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.613824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.614135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.614143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 
00:35:44.808 [2024-11-05 16:59:51.614447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.614456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.614767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.614776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.614969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.614977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.615246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.615253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.615560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.615569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 
00:35:44.808 [2024-11-05 16:59:51.615863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.615871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.616176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.616184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.616477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.616484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.616784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.616792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.617137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.617146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 
00:35:44.808 [2024-11-05 16:59:51.617338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.617347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.617661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.808 [2024-11-05 16:59:51.617669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.808 qpair failed and we were unable to recover it. 00:35:44.808 [2024-11-05 16:59:51.617973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.809 [2024-11-05 16:59:51.617981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.809 qpair failed and we were unable to recover it. 00:35:44.809 [2024-11-05 16:59:51.618304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.809 [2024-11-05 16:59:51.618311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.809 qpair failed and we were unable to recover it. 00:35:44.809 [2024-11-05 16:59:51.618625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.809 [2024-11-05 16:59:51.618634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.809 qpair failed and we were unable to recover it. 
00:35:44.809 [2024-11-05 16:59:51.618960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.809 [2024-11-05 16:59:51.618968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.809 qpair failed and we were unable to recover it. 00:35:44.809 [2024-11-05 16:59:51.619235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.809 [2024-11-05 16:59:51.619243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.809 qpair failed and we were unable to recover it. 00:35:44.809 [2024-11-05 16:59:51.619621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.809 [2024-11-05 16:59:51.619629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.809 qpair failed and we were unable to recover it. 00:35:44.809 [2024-11-05 16:59:51.619859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.809 [2024-11-05 16:59:51.619868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.809 qpair failed and we were unable to recover it. 00:35:44.809 [2024-11-05 16:59:51.620135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.809 [2024-11-05 16:59:51.620143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.809 qpair failed and we were unable to recover it. 
00:35:44.809 [... the same message triple (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated ~90 more times, 16:59:51.620447 through 16:59:51.647157 ...]
00:35:44.812 [2024-11-05 16:59:51.647480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.812 [2024-11-05 16:59:51.647489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.812 qpair failed and we were unable to recover it.
00:35:44.812 [2024-11-05 16:59:51.647797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.812 [2024-11-05 16:59:51.647806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.812 qpair failed and we were unable to recover it.
00:35:44.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3368133 Killed "${NVMF_APP[@]}" "$@"
00:35:44.812 [2024-11-05 16:59:51.648134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.812 [2024-11-05 16:59:51.648145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.812 qpair failed and we were unable to recover it.
00:35:44.812 [2024-11-05 16:59:51.648487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.812 [2024-11-05 16:59:51.648496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.812 qpair failed and we were unable to recover it.
00:35:44.812 [2024-11-05 16:59:51.648779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.812 [2024-11-05 16:59:51.648789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.812 qpair failed and we were unable to recover it.
00:35:44.812 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:44.812 [2024-11-05 16:59:51.649122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.812 [2024-11-05 16:59:51.649132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.812 qpair failed and we were unable to recover it.
00:35:44.812 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:44.812 [2024-11-05 16:59:51.649457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.812 [2024-11-05 16:59:51.649467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.812 qpair failed and we were unable to recover it.
00:35:44.813 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:35:44.813 [2024-11-05 16:59:51.649830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.649840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:44.813 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:44.813 [2024-11-05 16:59:51.650161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.650171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.650476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.650484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.650772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.650780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.651104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.651112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.651436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.651445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.651757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.651766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.652090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.652098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.652403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.652411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.652696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.652704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.653016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.813 [2024-11-05 16:59:51.653025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.813 qpair failed and we were unable to recover it.
00:35:44.813 [2024-11-05 16:59:51.653307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.653315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.653622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.653630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.653953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.653962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.654271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.654279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.654605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.654615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 
00:35:44.813 [2024-11-05 16:59:51.654920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.654929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.655268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.655278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.655580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.655589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.655896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.655906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.656212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.656221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 
00:35:44.813 [2024-11-05 16:59:51.656526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.656534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.656845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.813 [2024-11-05 16:59:51.656856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.813 qpair failed and we were unable to recover it. 00:35:44.813 [2024-11-05 16:59:51.657169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.657177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.657487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.657496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=3369161 00:35:44.814 [2024-11-05 16:59:51.657796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.657806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 
00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 3369161 00:35:44.814 [2024-11-05 16:59:51.658128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.658138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3369161 ']' 00:35:44.814 [2024-11-05 16:59:51.658441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.658452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:44.814 [2024-11-05 16:59:51.658755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.658787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 
00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.814 [2024-11-05 16:59:51.659088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.659099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:44.814 [2024-11-05 16:59:51.659404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.659415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 16:59:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:44.814 [2024-11-05 16:59:51.659620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.659632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.659959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.659970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 
00:35:44.814 [2024-11-05 16:59:51.660247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.660257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.660582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.660591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.660890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.660900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.661224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.661234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.661548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.661559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 
00:35:44.814 [2024-11-05 16:59:51.661865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.661874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.662187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.662197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.662492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.662502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.662812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.662822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.663182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.663191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 
00:35:44.814 [2024-11-05 16:59:51.663487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.663496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.814 qpair failed and we were unable to recover it. 00:35:44.814 [2024-11-05 16:59:51.663807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.814 [2024-11-05 16:59:51.663820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.664103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.664113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.664317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.664326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.664643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.664652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 
00:35:44.815 [2024-11-05 16:59:51.664949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.664959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.665250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.665259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.665566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.665575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.665762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.665772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.666055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.666064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 
00:35:44.815 [2024-11-05 16:59:51.666337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.666347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.666682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.666692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.666988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.666997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.667185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.667194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.667396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.667406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 
00:35:44.815 [2024-11-05 16:59:51.667588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.667598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.667900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.667909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.668231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.668240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.668425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.668433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.668740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.668761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 
00:35:44.815 [2024-11-05 16:59:51.669068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.669077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.669378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.669387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.669543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.669553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.669862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.669870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.670195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.670203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 
00:35:44.815 [2024-11-05 16:59:51.670473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.670481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.670773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.815 [2024-11-05 16:59:51.670781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.815 qpair failed and we were unable to recover it. 00:35:44.815 [2024-11-05 16:59:51.671134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.671143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.671458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.671467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.671804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.671812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 
00:35:44.816 [2024-11-05 16:59:51.672114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.672122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.672419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.672428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.672724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.672732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.673040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.673049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.673331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.673339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 
00:35:44.816 [2024-11-05 16:59:51.673668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.673676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.674015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.674025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.674317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.674325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.674529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.674537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.674852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.674861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 
00:35:44.816 [2024-11-05 16:59:51.675168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.675176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.675331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.675341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.675630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.675639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.675924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.675932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.676230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.676239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 
00:35:44.816 [2024-11-05 16:59:51.676547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.676554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.676891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.676900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.677209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.677218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.677544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.677552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.677831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.677839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 
00:35:44.816 [2024-11-05 16:59:51.678141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.678149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.678321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.678329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.678659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.678668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.678960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.678969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.679146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.679154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 
00:35:44.816 [2024-11-05 16:59:51.679453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.679462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.679833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.679841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.680120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.680128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.680425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.680433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.816 qpair failed and we were unable to recover it. 00:35:44.816 [2024-11-05 16:59:51.680736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.816 [2024-11-05 16:59:51.680744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 
00:35:44.817 [2024-11-05 16:59:51.681045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.681054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.681360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.681369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.681702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.681710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.681990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.681999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.682185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.682194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 
00:35:44.817 [2024-11-05 16:59:51.682454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.682463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.682773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.682782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.683116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.683125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.683415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.683423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.683719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.683728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 
00:35:44.817 [2024-11-05 16:59:51.684024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.684033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.684255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.684263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.684556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.684564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.684854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.684863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.685203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.685211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 
00:35:44.817 [2024-11-05 16:59:51.685540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.685549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.685737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.685750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.686028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.686036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.686322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.686330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.686643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.686651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 
00:35:44.817 [2024-11-05 16:59:51.686956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.686964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.687294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.687306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.687688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.687697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.688045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.688054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.688344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.688352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 
00:35:44.817 [2024-11-05 16:59:51.688626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.688635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.688825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.688833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.689195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.689204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.689530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.689540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.689842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.689850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 
00:35:44.817 [2024-11-05 16:59:51.690180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.690189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.690489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.690498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.690867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.690875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.691130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.691138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.691420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.691428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 
00:35:44.817 [2024-11-05 16:59:51.691731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.691740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.817 [2024-11-05 16:59:51.691986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.817 [2024-11-05 16:59:51.691995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.817 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.692293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.692301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.692595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.692603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.692912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.692920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 
00:35:44.818 [2024-11-05 16:59:51.693226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.693235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.693541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.693550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.693875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.693883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.694192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.694200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.694522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.694530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 
00:35:44.818 [2024-11-05 16:59:51.694808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.694817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.695054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.695063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.695373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.695381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.695575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.695583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.695879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.695889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 
00:35:44.818 [2024-11-05 16:59:51.696216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.696225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.696616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.696625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.696979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.696989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.697262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.697271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.697504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.697512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 
00:35:44.818 [2024-11-05 16:59:51.697812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.697821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.698191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.698200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.698459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.698467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.698760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.698769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.699020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.699028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 
00:35:44.818 [2024-11-05 16:59:51.699352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.699361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.699669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.699680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.699989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.699999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.700311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.700319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.700638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.700646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 
00:35:44.818 [2024-11-05 16:59:51.700956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.700965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.701323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.701332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.701638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.701647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.701937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.701945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 00:35:44.818 [2024-11-05 16:59:51.702219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.818 [2024-11-05 16:59:51.702226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.818 qpair failed and we were unable to recover it. 
00:35:44.818 [2024-11-05 16:59:51.702552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.702561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 00:35:44.819 [2024-11-05 16:59:51.702869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.702877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 00:35:44.819 [2024-11-05 16:59:51.703169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.703176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 00:35:44.819 [2024-11-05 16:59:51.703495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.703504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 00:35:44.819 [2024-11-05 16:59:51.703828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.703836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 
00:35:44.819 [2024-11-05 16:59:51.704144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.704153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 00:35:44.819 [2024-11-05 16:59:51.704440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.704449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 00:35:44.819 [2024-11-05 16:59:51.704756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.704765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 00:35:44.819 [2024-11-05 16:59:51.705029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.705037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 00:35:44.819 [2024-11-05 16:59:51.705237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.819 [2024-11-05 16:59:51.705245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.819 qpair failed and we were unable to recover it. 
00:35:44.819 [2024-11-05 16:59:51.708789] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:35:44.819 [2024-11-05 16:59:51.708841] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:35:44.822 [2024-11-05 16:59:51.738328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.738338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.738647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.738656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.739007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.739016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.739320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.739329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.739619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.739627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 
00:35:44.822 [2024-11-05 16:59:51.739817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.739825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.740149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.740157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.740479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.740488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.740654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.740662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.740946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.740955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 
00:35:44.822 [2024-11-05 16:59:51.741279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.741287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.741600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.741609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.741937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.741945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.742235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.742242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.742533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.742541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 
00:35:44.822 [2024-11-05 16:59:51.742732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.742739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.743046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.743055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.743360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.743368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.743661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.743668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.743963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.743972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 
00:35:44.822 [2024-11-05 16:59:51.744261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.744268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.744542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.744549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.744874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.744883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.745187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.745196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.745383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.745393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 
00:35:44.822 [2024-11-05 16:59:51.745702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.745710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.746020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.746029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.746333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.746341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.746628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.746636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.746968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.746976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 
00:35:44.822 [2024-11-05 16:59:51.747264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.747271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.747588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.747597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.747925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.822 [2024-11-05 16:59:51.747934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.822 qpair failed and we were unable to recover it. 00:35:44.822 [2024-11-05 16:59:51.748223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.748231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.748519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.748527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 
00:35:44.823 [2024-11-05 16:59:51.748810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.748818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.749138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.749147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.749452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.749461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.749751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.749760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.750050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.750058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 
00:35:44.823 [2024-11-05 16:59:51.750342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.750351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.750683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.750691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.750997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.751006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.751319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.751327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.751652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.751661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 
00:35:44.823 [2024-11-05 16:59:51.751976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.751984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.752308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.752317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.752624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.752632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.752965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.752973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.753307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.753315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 
00:35:44.823 [2024-11-05 16:59:51.753601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.753609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.753913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.753921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.754223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.754232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.754542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.754549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.754866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.754874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 
00:35:44.823 [2024-11-05 16:59:51.755208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.755216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.755543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.755552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.755872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.755880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.756368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.756383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.756700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.756709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 
00:35:44.823 [2024-11-05 16:59:51.757014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.757023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.757342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.757351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.757578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.757587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.757895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.757904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.758085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.758093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 
00:35:44.823 [2024-11-05 16:59:51.758410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.758418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.758710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.823 [2024-11-05 16:59:51.758719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.823 qpair failed and we were unable to recover it. 00:35:44.823 [2024-11-05 16:59:51.759043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.759051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.759241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.759249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.759554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.759562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 
00:35:44.824 [2024-11-05 16:59:51.759820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.759829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.760130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.760141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.760423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.760431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.760738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.760751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.761041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.761049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 
00:35:44.824 [2024-11-05 16:59:51.761356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.761364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.761691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.761699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.762000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.762010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.762295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.762303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.762607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.762615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 
00:35:44.824 [2024-11-05 16:59:51.762942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.762950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.763136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.763143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.763468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.763477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.763787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.763796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.764096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.764104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 
00:35:44.824 [2024-11-05 16:59:51.764406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.764415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.764709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.764718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.765045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.765054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.765344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.765353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.765684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.765692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 
00:35:44.824 [2024-11-05 16:59:51.766009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.766018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.766309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.766317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.766665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.766673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.766981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.766989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.767288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.767296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 
00:35:44.824 [2024-11-05 16:59:51.767602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.767611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.767804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.767812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.768116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.768124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.768417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.768426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.768728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.768738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 
00:35:44.824 [2024-11-05 16:59:51.769044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.769053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.769339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.769347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.769637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.769645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.769932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.769940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.824 [2024-11-05 16:59:51.770249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.770257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 
00:35:44.824 [2024-11-05 16:59:51.770565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.824 [2024-11-05 16:59:51.770574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.824 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.770874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.770883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.771187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.771195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.771521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.771530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.771857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.771865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 
00:35:44.825 [2024-11-05 16:59:51.772159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.772168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.772478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.772488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.772819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.772829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.773209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.773217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.773498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.773506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 
00:35:44.825 [2024-11-05 16:59:51.773814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.773823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.774148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.774156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.774461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.774468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.774660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.774669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.774952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.774961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 
00:35:44.825 [2024-11-05 16:59:51.775268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.775276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.775547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.775555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.775869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.775877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.776182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.776190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.776492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.776500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 
00:35:44.825 [2024-11-05 16:59:51.776810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.776818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.777123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.777131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.777436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.777445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.777727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.777735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.777942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.777950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 
00:35:44.825 [2024-11-05 16:59:51.778259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.778268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.778568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.778578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.778867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.778875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.779209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.779217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.779542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.779550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 
00:35:44.825 [2024-11-05 16:59:51.779854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.779862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.780166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.780175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.780485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.780493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.780802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.780811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.781088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.781096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 
00:35:44.825 [2024-11-05 16:59:51.781387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.781396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.781700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.781707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.782036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.782045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.825 [2024-11-05 16:59:51.782347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.825 [2024-11-05 16:59:51.782356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.825 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.782671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.782679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 
00:35:44.826 [2024-11-05 16:59:51.782893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.782902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.783222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.783230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.783552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.783560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.783850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.783858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.784177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.784186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 
00:35:44.826 [2024-11-05 16:59:51.784487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.784496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.784787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.784801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.785105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.785113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.785416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.785425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.785713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.785721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 
00:35:44.826 [2024-11-05 16:59:51.786028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.786037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.786325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.786333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.786639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.786648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.786972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.786981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.787285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.787294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 
00:35:44.826 [2024-11-05 16:59:51.787583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.787591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.787887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.787896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.788212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.788221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.788526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.788535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.788861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.788869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 
00:35:44.826 [2024-11-05 16:59:51.789115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.789123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.789451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.789459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.789787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.789796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.790089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.790098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.790377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.790385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 
00:35:44.826 [2024-11-05 16:59:51.790688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.790696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.790989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.790997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.791280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.791288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.791480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.791488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 00:35:44.826 [2024-11-05 16:59:51.791797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.826 [2024-11-05 16:59:51.791805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.826 qpair failed and we were unable to recover it. 
00:35:44.828 [2024-11-05 16:59:51.810545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.828 [2024-11-05 16:59:51.810554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.828 qpair failed and we were unable to recover it.
00:35:44.828 [2024-11-05 16:59:51.810828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.828 [2024-11-05 16:59:51.810838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.828 [2024-11-05 16:59:51.810831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:44.828 qpair failed and we were unable to recover it.
00:35:44.828 [2024-11-05 16:59:51.811175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.828 [2024-11-05 16:59:51.811183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.828 qpair failed and we were unable to recover it.
00:35:44.828 [2024-11-05 16:59:51.811472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.828 [2024-11-05 16:59:51.811480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.828 qpair failed and we were unable to recover it.
00:35:44.828 [2024-11-05 16:59:51.811758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.828 [2024-11-05 16:59:51.811766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.828 qpair failed and we were unable to recover it.
00:35:44.829 [2024-11-05 16:59:51.824447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.829 [2024-11-05 16:59:51.824455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.829 qpair failed and we were unable to recover it. 00:35:44.829 [2024-11-05 16:59:51.824779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.829 [2024-11-05 16:59:51.824787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.829 qpair failed and we were unable to recover it. 00:35:44.829 [2024-11-05 16:59:51.825094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.829 [2024-11-05 16:59:51.825103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.829 qpair failed and we were unable to recover it. 00:35:44.829 [2024-11-05 16:59:51.825398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.829 [2024-11-05 16:59:51.825406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.829 qpair failed and we were unable to recover it. 00:35:44.829 [2024-11-05 16:59:51.825606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.829 [2024-11-05 16:59:51.825614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.829 qpair failed and we were unable to recover it. 
00:35:44.829 [2024-11-05 16:59:51.825921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.829 [2024-11-05 16:59:51.825929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.829 qpair failed and we were unable to recover it. 00:35:44.829 [2024-11-05 16:59:51.826205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.829 [2024-11-05 16:59:51.826213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.829 qpair failed and we were unable to recover it. 00:35:44.829 [2024-11-05 16:59:51.826539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.829 [2024-11-05 16:59:51.826548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.829 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.826862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.826872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.827210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.827218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 
00:35:44.830 [2024-11-05 16:59:51.827521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.827529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.827816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.827825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.827986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.827993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.828306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.828315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.828623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.828631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 
00:35:44.830 [2024-11-05 16:59:51.828917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.828925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.829260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.829269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.829559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.829567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.829875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.829891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.830186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.830195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 
00:35:44.830 [2024-11-05 16:59:51.830506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.830514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.830815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.830824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.830975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.830983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.831265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.831274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.831469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.831478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 
00:35:44.830 [2024-11-05 16:59:51.831795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.831803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.832098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.832107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.832427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.832435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.832759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.832768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.833080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.833088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 
00:35:44.830 [2024-11-05 16:59:51.833413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.833422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.833588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.833597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.833887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.833896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.834206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.834214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.834514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.834522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 
00:35:44.830 [2024-11-05 16:59:51.834809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.834817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.835152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.835161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.835499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.835508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.835775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.835784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.836101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.836110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 
00:35:44.830 [2024-11-05 16:59:51.836436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.836445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.836756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.836766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.837103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.837112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.837416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.837424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.830 [2024-11-05 16:59:51.837722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.837730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 
00:35:44.830 [2024-11-05 16:59:51.838020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.830 [2024-11-05 16:59:51.838028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.830 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.838319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.838327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.838634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.838643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.838975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.838984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.839291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.839300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 
00:35:44.831 [2024-11-05 16:59:51.839587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.839595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.839899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.839909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.840228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.840235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.840550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.840559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.840849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.840860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 
00:35:44.831 [2024-11-05 16:59:51.841166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.841175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.841511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.841520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.841831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.841840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.842144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.842161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.842454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.842462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 
00:35:44.831 [2024-11-05 16:59:51.842752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.842761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.843074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.843083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.843371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.843379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.843711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.843720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.844024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.844032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 
00:35:44.831 [2024-11-05 16:59:51.844390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.844398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.844755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.844767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.845045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.845053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.845353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.845362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 00:35:44.831 [2024-11-05 16:59:51.845671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.831 [2024-11-05 16:59:51.845681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.831 qpair failed and we were unable to recover it. 
00:35:44.831 [2024-11-05 16:59:51.845992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.831 [2024-11-05 16:59:51.846002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.831 qpair failed and we were unable to recover it.
00:35:44.831 [2024-11-05 16:59:51.846309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.831 [2024-11-05 16:59:51.846318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.831 qpair failed and we were unable to recover it.
00:35:44.831 [2024-11-05 16:59:51.846460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:44.831 [2024-11-05 16:59:51.846489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:44.831 [2024-11-05 16:59:51.846498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:44.831 [2024-11-05 16:59:51.846505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:44.831 [2024-11-05 16:59:51.846511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:44.831 [2024-11-05 16:59:51.846608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.831 [2024-11-05 16:59:51.846616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.831 qpair failed and we were unable to recover it.
00:35:44.831 [2024-11-05 16:59:51.846777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.831 [2024-11-05 16:59:51.846785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.831 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats from 16:59:51.847051 through 16:59:51.847979 ...]
00:35:44.831 [2024-11-05 16:59:51.847978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:35:44.831 [2024-11-05 16:59:51.848141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:35:44.831 [2024-11-05 16:59:51.848251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.831 [2024-11-05 16:59:51.848261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.831 qpair failed and we were unable to recover it.
00:35:44.831 [2024-11-05 16:59:51.848342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:35:44.831 [2024-11-05 16:59:51.848342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:35:44.831 [2024-11-05 16:59:51.848568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.831 [2024-11-05 16:59:51.848577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.831 qpair failed and we were unable to recover it.
00:35:44.831 [2024-11-05 16:59:51.848685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.831 [2024-11-05 16:59:51.848692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.831 qpair failed and we were unable to recover it.
00:35:44.831 [2024-11-05 16:59:51.848978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:44.832 [2024-11-05 16:59:51.848988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:44.832 qpair failed and we were unable to recover it.
00:35:44.832 [2024-11-05 16:59:51.849383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.849392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:44.832 [2024-11-05 16:59:51.849693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.849702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:44.832 [2024-11-05 16:59:51.850000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.850009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:44.832 [2024-11-05 16:59:51.850200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.850210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:44.832 [2024-11-05 16:59:51.850406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.850415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 
00:35:44.832 [2024-11-05 16:59:51.850752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.850762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:44.832 [2024-11-05 16:59:51.850959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.850968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:44.832 [2024-11-05 16:59:51.851286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.851296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:44.832 [2024-11-05 16:59:51.851606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.851614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:44.832 [2024-11-05 16:59:51.851802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.851810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 
00:35:44.832 [2024-11-05 16:59:51.852138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.832 [2024-11-05 16:59:51.852148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:44.832 qpair failed and we were unable to recover it. 00:35:45.105 [2024-11-05 16:59:51.852458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.105 [2024-11-05 16:59:51.852468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.105 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.852775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.852784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.853117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.853126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.853418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.853428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 
00:35:45.106 [2024-11-05 16:59:51.853740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.853753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.854039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.854048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.854236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.854244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.854552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.854561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.854753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.854762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 
00:35:45.106 [2024-11-05 16:59:51.855073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.855082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.855393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.855402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.855711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.855721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.856034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.856043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.856349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.856359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 
00:35:45.106 [2024-11-05 16:59:51.856669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.856678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.856984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.856993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.857379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.857389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.857700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.857709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.858018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.858027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 
00:35:45.106 [2024-11-05 16:59:51.858210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.858219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.858535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.858544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.858731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.858740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.859055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.859064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.859418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.859430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 
00:35:45.106 [2024-11-05 16:59:51.859742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.859755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.859925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.859934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.860261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.860269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.860457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.860466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.860773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.860782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 
00:35:45.106 [2024-11-05 16:59:51.861096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.861105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.861383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.861392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.861684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.861693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.861889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.861897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.862195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.862214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 
00:35:45.106 [2024-11-05 16:59:51.862368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.862377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.862715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.862723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.862920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.862929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.863200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.863208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 00:35:45.106 [2024-11-05 16:59:51.863519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.106 [2024-11-05 16:59:51.863529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.106 qpair failed and we were unable to recover it. 
00:35:45.107 [2024-11-05 16:59:51.863815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.863824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.863878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.863886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.864197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.864205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.864531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.864539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.864892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.864902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 
00:35:45.107 [2024-11-05 16:59:51.865170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.865178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.865504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.865513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.865822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.865831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.866140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.866149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.866443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.866452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 
00:35:45.107 [2024-11-05 16:59:51.866646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.866654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.867022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.867031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.867247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.867255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.867580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.867589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.867890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.867899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 
00:35:45.107 [2024-11-05 16:59:51.868220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.868229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.868392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.868401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.868575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.868585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.868759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.868768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.869064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.869072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 
00:35:45.107 [2024-11-05 16:59:51.869252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.869261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.869549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.869558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.869865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.869874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.870035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.870045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.870246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.870257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 
00:35:45.107 [2024-11-05 16:59:51.870439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.870448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.870637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.870646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.870841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.870849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.871040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.871048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.871365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.871375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 
00:35:45.107 [2024-11-05 16:59:51.871688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.871697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.872049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.872059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.872363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.872372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.872676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.872685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 00:35:45.107 [2024-11-05 16:59:51.873002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.107 [2024-11-05 16:59:51.873012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.107 qpair failed and we were unable to recover it. 
00:35:45.107 [2024-11-05 16:59:51.873319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.108 [2024-11-05 16:59:51.873328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.108 qpair failed and we were unable to recover it. 00:35:45.108 [2024-11-05 16:59:51.873520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.108 [2024-11-05 16:59:51.873529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.108 qpair failed and we were unable to recover it. 00:35:45.108 [2024-11-05 16:59:51.873698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.108 [2024-11-05 16:59:51.873707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.108 qpair failed and we were unable to recover it. 00:35:45.108 [2024-11-05 16:59:51.873986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.108 [2024-11-05 16:59:51.873997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.108 qpair failed and we were unable to recover it. 00:35:45.108 [2024-11-05 16:59:51.874320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.108 [2024-11-05 16:59:51.874331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.108 qpair failed and we were unable to recover it. 
00:35:45.108 [... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." entries repeat continuously from 16:59:51.874659 through 16:59:51.904457; repeated entries elided ...]
00:35:45.111 [2024-11-05 16:59:51.904642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.904650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.904813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.904822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.904865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.904873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.905192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.905201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.905248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.905254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 
00:35:45.111 [2024-11-05 16:59:51.905514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.905522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.905876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.905884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.906063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.906071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.906112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.906119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.906290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.906297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 
00:35:45.111 [2024-11-05 16:59:51.906477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.906486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.906778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.906787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.907117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.907126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.907493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.907502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.907691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.907699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 
00:35:45.111 [2024-11-05 16:59:51.908025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.908033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.908240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.908248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.908440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.908448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.908744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.908756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.909070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.909079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 
00:35:45.111 [2024-11-05 16:59:51.909408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.909416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.909739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.909754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.909952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.909959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.910270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.910278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.910565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.910572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 
00:35:45.111 [2024-11-05 16:59:51.910889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.910897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.911227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.911236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.911626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.911635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.911957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.911966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.912285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.912293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 
00:35:45.111 [2024-11-05 16:59:51.912635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.912644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.912984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.111 [2024-11-05 16:59:51.912992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.111 qpair failed and we were unable to recover it. 00:35:45.111 [2024-11-05 16:59:51.913285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.913294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.913566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.913575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.913925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.913934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 
00:35:45.112 [2024-11-05 16:59:51.914237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.914247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.914424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.914432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.914756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.914765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.915100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.915108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.915421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.915431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 
00:35:45.112 [2024-11-05 16:59:51.915698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.915707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.915890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.915900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.916241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.916249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.916463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.916471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.916801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.916811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 
00:35:45.112 [2024-11-05 16:59:51.917007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.917016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.917369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.917379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.917556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.917565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.917756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.917765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.918047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.918056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 
00:35:45.112 [2024-11-05 16:59:51.918388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.918396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.918735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.918744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.918916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.918924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.919209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.919217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.919528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.919536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 
00:35:45.112 [2024-11-05 16:59:51.919867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.919876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.920070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.920078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.920377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.920386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.920693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.920702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.921028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.921037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 
00:35:45.112 [2024-11-05 16:59:51.921315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.921324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.921514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.921524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.921808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.921817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.922024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.922032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.922164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.922172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 
00:35:45.112 [2024-11-05 16:59:51.922341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.922349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.922524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.922533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.922839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.922848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.923195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.923204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.112 [2024-11-05 16:59:51.923515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.923523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 
00:35:45.112 [2024-11-05 16:59:51.923814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.112 [2024-11-05 16:59:51.923822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.112 qpair failed and we were unable to recover it. 00:35:45.113 [2024-11-05 16:59:51.924155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.113 [2024-11-05 16:59:51.924163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.113 qpair failed and we were unable to recover it. 00:35:45.113 [2024-11-05 16:59:51.924428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.113 [2024-11-05 16:59:51.924436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.113 qpair failed and we were unable to recover it. 00:35:45.113 [2024-11-05 16:59:51.924772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.113 [2024-11-05 16:59:51.924780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.113 qpair failed and we were unable to recover it. 00:35:45.113 [2024-11-05 16:59:51.925105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.113 [2024-11-05 16:59:51.925113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.113 qpair failed and we were unable to recover it. 
00:35:45.113 [2024-11-05 16:59:51.925307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.113 [2024-11-05 16:59:51.925314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.113 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, tqpair=0x7f0cc4000b90, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 16:59:51.925633 through 16:59:51.958588 ...]
00:35:45.116 [2024-11-05 16:59:51.958900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.116 [2024-11-05 16:59:51.958908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.116 qpair failed and we were unable to recover it.
00:35:45.116 [2024-11-05 16:59:51.959202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.959210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.959515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.959524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.959823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.959831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.960221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.960229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.960409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.960417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 
00:35:45.116 [2024-11-05 16:59:51.960695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.960703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.960924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.960933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.961133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.961142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.961307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.961316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.961631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.961640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 
00:35:45.116 [2024-11-05 16:59:51.961798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.961806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.962103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.962111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.962519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.962527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.962916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.962924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.963290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.963299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 
00:35:45.116 [2024-11-05 16:59:51.963475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.963484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.963811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.963819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.964133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.964141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.964469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.964477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.964804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.964812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 
00:35:45.116 [2024-11-05 16:59:51.965112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.965120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.965439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.965448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.965755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.965764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.966047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.966056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.966368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.966377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 
00:35:45.116 [2024-11-05 16:59:51.966674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.966681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.967012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.967021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.967193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.967202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.967464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.967472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 00:35:45.116 [2024-11-05 16:59:51.967628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.116 [2024-11-05 16:59:51.967636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.116 qpair failed and we were unable to recover it. 
00:35:45.117 [2024-11-05 16:59:51.967828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.967836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.968015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.968031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.968269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.968277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.968468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.968475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.968810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.968818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 
00:35:45.117 [2024-11-05 16:59:51.969160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.969168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.969493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.969501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.969838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.969847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.970183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.970191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.970376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.970384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 
00:35:45.117 [2024-11-05 16:59:51.970554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.970562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.970856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.970864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.971200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.971208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.971523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.971531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.971847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.971856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 
00:35:45.117 [2024-11-05 16:59:51.972193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.972201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.972378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.972386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.972770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.972779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.973064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.973072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.973264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.973273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 
00:35:45.117 [2024-11-05 16:59:51.973605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.973614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.973916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.973924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.974087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.974095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.974390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.974398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.974711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.974719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 
00:35:45.117 [2024-11-05 16:59:51.974897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.974905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.974945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.974952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.975136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.975144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.975316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.975323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.975618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.975627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 
00:35:45.117 [2024-11-05 16:59:51.975858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.975867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.976060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.976068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.976277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.976285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.976323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.976330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.976487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.976495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 
00:35:45.117 [2024-11-05 16:59:51.976679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.976689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.976988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.976997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.977164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.117 [2024-11-05 16:59:51.977172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.117 qpair failed and we were unable to recover it. 00:35:45.117 [2024-11-05 16:59:51.977326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.118 [2024-11-05 16:59:51.977334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.118 qpair failed and we were unable to recover it. 00:35:45.118 [2024-11-05 16:59:51.977487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.118 [2024-11-05 16:59:51.977495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.118 qpair failed and we were unable to recover it. 
00:35:45.118 [2024-11-05 16:59:51.977660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.118 [2024-11-05 16:59:51.977669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.118 qpair failed and we were unable to recover it. 00:35:45.118 [2024-11-05 16:59:51.977846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.118 [2024-11-05 16:59:51.977857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.118 qpair failed and we were unable to recover it. 00:35:45.118 [2024-11-05 16:59:51.978093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.118 [2024-11-05 16:59:51.978101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.118 qpair failed and we were unable to recover it. 00:35:45.118 [2024-11-05 16:59:51.978397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.118 [2024-11-05 16:59:51.978405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.118 qpair failed and we were unable to recover it. 00:35:45.118 [2024-11-05 16:59:51.978475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.118 [2024-11-05 16:59:51.978482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.118 qpair failed and we were unable to recover it. 
00:35:45.118 [2024-11-05 16:59:51.978750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.118 [2024-11-05 16:59:51.978758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.118 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / qpair-recovery-failure messages repeated through 16:59:52.010702]
00:35:45.121 [2024-11-05 16:59:52.011020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.011029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.011312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.011323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.011474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.011482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.011807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.011815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.012141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.012149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 
00:35:45.121 [2024-11-05 16:59:52.012455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.012464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.012758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.012768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.013072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.013080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.013394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.013402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.013709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.013717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 
00:35:45.121 [2024-11-05 16:59:52.014043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.014052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.014362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.014370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.014716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.014724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.015066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.015074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.015366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.015374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 
00:35:45.121 [2024-11-05 16:59:52.015650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.015658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.015814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.015822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.016136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.016144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.016478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.016486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.016795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.016803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 
00:35:45.121 [2024-11-05 16:59:52.017110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.017119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.017308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.017316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.017623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.017632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.017914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.017923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.018247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.018256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 
00:35:45.121 [2024-11-05 16:59:52.018546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.018554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.018861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.018869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.121 [2024-11-05 16:59:52.019185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.121 [2024-11-05 16:59:52.019193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.121 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.019493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.019502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.019834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.019843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 
00:35:45.122 [2024-11-05 16:59:52.020015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.020022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.020333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.020340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.020687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.020695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.021019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.021028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.021359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.021367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 
00:35:45.122 [2024-11-05 16:59:52.021677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.021685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.022001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.022010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.022179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.022187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.022488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.022496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.022687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.022696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 
00:35:45.122 [2024-11-05 16:59:52.022952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.022960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.023287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.023298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.023590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.023600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.023914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.023922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.024225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.024233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 
00:35:45.122 [2024-11-05 16:59:52.024539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.024547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.024831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.024839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.025192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.025199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.025511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.025520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.025816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.025825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 
00:35:45.122 [2024-11-05 16:59:52.026172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.026179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.026580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.026588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.026895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.026904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.027147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.027155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.027543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.027551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 
00:35:45.122 [2024-11-05 16:59:52.027815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.027823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.028107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.028115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.028434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.028442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.028741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.028752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.029032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.029040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 
00:35:45.122 [2024-11-05 16:59:52.029348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.029357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.029510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.029519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.029827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.029835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.030144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.030153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.030465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.030474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 
00:35:45.122 [2024-11-05 16:59:52.030844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.122 [2024-11-05 16:59:52.030852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.122 qpair failed and we were unable to recover it. 00:35:45.122 [2024-11-05 16:59:52.031145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.031153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.031467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.031475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.031794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.031804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.031976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.031984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 
00:35:45.123 [2024-11-05 16:59:52.032307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.032315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.032490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.032498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.032789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.032797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.033108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.033116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.033413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.033422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 
00:35:45.123 [2024-11-05 16:59:52.033727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.033734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.034024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.034032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.034384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.034392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.034689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.034697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.035014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.035022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 
00:35:45.123 [2024-11-05 16:59:52.035334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.035343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.035679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.035687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.035850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.035858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.036125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.036133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.036448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.036456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 
00:35:45.123 [2024-11-05 16:59:52.036767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.036777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.037079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.037086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.037430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.037439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.037741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.037752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.038067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.038075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 
00:35:45.123 [2024-11-05 16:59:52.038366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.038375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.038689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.038696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.038994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.039003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.039314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.039322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.039688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.039696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 
00:35:45.123 [2024-11-05 16:59:52.040000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.040008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.040318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.040326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.040483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.040492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.040799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.040808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.041103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.041110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 
00:35:45.123 [2024-11-05 16:59:52.041421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.041429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.041744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.041755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.042069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.042077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.042388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.123 [2024-11-05 16:59:52.042396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.123 qpair failed and we were unable to recover it. 00:35:45.123 [2024-11-05 16:59:52.042711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.042719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 
00:35:45.124 [2024-11-05 16:59:52.043000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.043008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.043170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.043178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.043490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.043498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.043801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.043811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.044141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.044149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 
00:35:45.124 [2024-11-05 16:59:52.044345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.044353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.044682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.044691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.044997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.045005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.045320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.045329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.045623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.045632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 
00:35:45.124 [2024-11-05 16:59:52.045961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.045971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.046279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.046287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.046599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.046609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.046934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.046942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.047257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.047266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 
00:35:45.124 [2024-11-05 16:59:52.047435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.047444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.047742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.047753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.048071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.048080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.048272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.048280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.048449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.048457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 
00:35:45.124 [2024-11-05 16:59:52.048794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.048803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.049107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.049115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.049422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.049430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.049756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.049765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.050066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.050074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 
00:35:45.124 [2024-11-05 16:59:52.050346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.050354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.050670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.050677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.050939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.050948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.051252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.051260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.051592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.051600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 
00:35:45.124 [2024-11-05 16:59:52.051902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.051910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.052250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.052258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.052459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.124 [2024-11-05 16:59:52.052467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.124 qpair failed and we were unable to recover it. 00:35:45.124 [2024-11-05 16:59:52.052783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.052791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.053115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.053123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 
00:35:45.125 [2024-11-05 16:59:52.053430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.053438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.053751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.053760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.054070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.054078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.054386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.054393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.054706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.054714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 
00:35:45.125 [2024-11-05 16:59:52.054885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.054893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.055072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.055079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.055380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.055388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.055716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.055727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.056046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.056054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 
00:35:45.125 [2024-11-05 16:59:52.056206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.056214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.056527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.056535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.056727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.056735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.056941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.056949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.057249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.057257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 
00:35:45.125 [2024-11-05 16:59:52.057566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.057575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.057888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.057897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.058199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.058208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.058510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.058518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.058863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.058872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 
00:35:45.125 [2024-11-05 16:59:52.059061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.059069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.059388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.059397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.059731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.059739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.060050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.060059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.060411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.060420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 
00:35:45.125 [2024-11-05 16:59:52.060731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.060739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.061089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.061097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.061398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.061407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.061726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.061735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.062036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.062044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 
00:35:45.125 [2024-11-05 16:59:52.062397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.062406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.062714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.062725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.063096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.063105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.063275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.063284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.063582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.063591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 
00:35:45.125 [2024-11-05 16:59:52.063908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.063917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.125 [2024-11-05 16:59:52.064255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.125 [2024-11-05 16:59:52.064264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.125 qpair failed and we were unable to recover it. 00:35:45.126 [2024-11-05 16:59:52.064573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.126 [2024-11-05 16:59:52.064583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.126 qpair failed and we were unable to recover it. 00:35:45.126 [2024-11-05 16:59:52.064915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.126 [2024-11-05 16:59:52.064923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.126 qpair failed and we were unable to recover it. 00:35:45.126 [2024-11-05 16:59:52.065249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.126 [2024-11-05 16:59:52.065257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.126 qpair failed and we were unable to recover it. 
00:35:45.126 [2024-11-05 16:59:52.065567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.065576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.065887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.065896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.066208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.066216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.066422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.066431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.066770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.066779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.067101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.067109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.067399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.067408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.067716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.067724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.068036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.068046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.068358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.068366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.068669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.068676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.068998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.069006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.069391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.069398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.069707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.069714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.070022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.070030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.070340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.070348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.070662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.070670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.070982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.070991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.071248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.071255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.071571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.071579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.071959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.071967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.072253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.072261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.072554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.072562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.072911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.072919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.073082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.073090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.073411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.073419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.073589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.073597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.073912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.073920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.074232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.074240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.074551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.074560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.074871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.074879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.075120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.075128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.075431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.075439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.075645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.075653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.075961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.126 [2024-11-05 16:59:52.075969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.126 qpair failed and we were unable to recover it.
00:35:45.126 [2024-11-05 16:59:52.076144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.076152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.076417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.076426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.076496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.076505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.076679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.076687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.076871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.076879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.077171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.077179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.077367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.077376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.077550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.077560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.077868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.077876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.078185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.078193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.078498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.078507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.078688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.078696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.078861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.078869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.079202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.079212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.079395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.079404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.079586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.079594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.079769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.079777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.080069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.080076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.080241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.080250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.080481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.080489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.080809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.080818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.081139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.081147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.081315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.081324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.081429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.081437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.081579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.081587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.081756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.081764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.081947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.081956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.082246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.082254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.082408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.082415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.082688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.082695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 [2024-11-05 16:59:52.082786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.127 [2024-11-05 16:59:52.082794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.127 qpair failed and we were unable to recover it.
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Write completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Write completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Write completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Write completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Write completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Write completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.127 starting I/O failed
00:35:45.127 Read completed with error (sct=0, sc=8)
00:35:45.128 starting I/O failed
00:35:45.128 Read completed with error (sct=0, sc=8)
00:35:45.128 starting I/O failed
00:35:45.128 Write completed with error (sct=0, sc=8)
00:35:45.128 starting I/O failed
00:35:45.128 Write completed with error (sct=0, sc=8)
00:35:45.128 starting I/O failed
00:35:45.128 Read completed with error (sct=0, sc=8)
00:35:45.128 starting I/O failed
00:35:45.128 Read completed with error (sct=0, sc=8)
00:35:45.128 starting I/O failed
00:35:45.128 Write completed with error (sct=0, sc=8)
00:35:45.128 starting I/O failed
00:35:45.128 [2024-11-05 16:59:52.083089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:35:45.128 [2024-11-05 16:59:52.083438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.083454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24900c0 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.083816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.083830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24900c0 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.084198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.084209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.084412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.084420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.084596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.084604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.084852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.084860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.085250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.085257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.085475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.085483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.085697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.085704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.085865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.085874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.086065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.086073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.086231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.086239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.086512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.086520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.086707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.086715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.086874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.086883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.087165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.087174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.087491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.087499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.087806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.087814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.088098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.088106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.088418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.088425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.088616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.088624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.088785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.128 [2024-11-05 16:59:52.088794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.128 qpair failed and we were unable to recover it.
00:35:45.128 [2024-11-05 16:59:52.089065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.089074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 00:35:45.128 [2024-11-05 16:59:52.089392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.089401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 00:35:45.128 [2024-11-05 16:59:52.089719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.089727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 00:35:45.128 [2024-11-05 16:59:52.090053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.090061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 00:35:45.128 [2024-11-05 16:59:52.090262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.090271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 
00:35:45.128 [2024-11-05 16:59:52.090602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.090610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 00:35:45.128 [2024-11-05 16:59:52.090925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.090933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 00:35:45.128 [2024-11-05 16:59:52.091245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.091252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 00:35:45.128 [2024-11-05 16:59:52.091569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.091577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 00:35:45.128 [2024-11-05 16:59:52.091751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.128 [2024-11-05 16:59:52.091761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.128 qpair failed and we were unable to recover it. 
00:35:45.129 [2024-11-05 16:59:52.092163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.092171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.092486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.092494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.092814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.092823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.093151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.093159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.093432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.093441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 
00:35:45.129 [2024-11-05 16:59:52.093759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.093768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.094107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.094115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.094422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.094431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.094741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.094753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.094919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.094929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 
00:35:45.129 [2024-11-05 16:59:52.095249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.095258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.095602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.095612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.095916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.095924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.096253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.096261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.096565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.096573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 
00:35:45.129 [2024-11-05 16:59:52.096878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.096886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.097152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.097160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.097475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.097484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.097653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.097662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.097957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.097965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 
00:35:45.129 [2024-11-05 16:59:52.098271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.098280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.098588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.098595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.098918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.098927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.099244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.099255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.099569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.099577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 
00:35:45.129 [2024-11-05 16:59:52.099940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.099949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.100251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.100260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.100570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.100578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.100924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.100932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.101257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.101266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 
00:35:45.129 [2024-11-05 16:59:52.101459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.101468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.101742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.101754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.102091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.102099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.102416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.102425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.102728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.102736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 
00:35:45.129 [2024-11-05 16:59:52.103060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.103069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.103377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.103386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.103705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.129 [2024-11-05 16:59:52.103714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.129 qpair failed and we were unable to recover it. 00:35:45.129 [2024-11-05 16:59:52.104025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.104034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.104341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.104350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.130 [2024-11-05 16:59:52.104538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.104548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.104704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.104714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.104992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.105000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.105315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.105323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.105491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.105500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.130 [2024-11-05 16:59:52.105815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.105824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.106117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.106125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.106404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.106413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.106571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.106580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.106968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.106977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.130 [2024-11-05 16:59:52.107311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.107320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.107713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.107722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.108041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.108049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.108373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.108381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.108714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.108722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.130 [2024-11-05 16:59:52.109031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.109039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.109350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.109359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.109756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.109764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.110065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.110074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.110379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.110387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.130 [2024-11-05 16:59:52.110695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.110703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.111020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.111028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.111321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.111329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.111641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.111651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.111990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.111997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.130 [2024-11-05 16:59:52.112327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.112336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.112621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.112630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.112926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.112933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.113248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.113257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.113570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.113579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.130 [2024-11-05 16:59:52.113923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.113931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.114237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.114246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.114545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.114554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.114870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.114878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 00:35:45.130 [2024-11-05 16:59:52.115216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.115223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.130 [2024-11-05 16:59:52.115533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.130 [2024-11-05 16:59:52.115541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.130 qpair failed and we were unable to recover it. 
00:35:45.134 [identical error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated through 16:59:52.145606] 
00:35:45.134 [2024-11-05 16:59:52.145961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.145970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.146268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.146277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.146466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.146475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.146788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.146796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.147095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.147104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 
00:35:45.134 [2024-11-05 16:59:52.147411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.147419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.147729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.147738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.148074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.148082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.148410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.148419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.148738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.148754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 
00:35:45.134 [2024-11-05 16:59:52.149076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.149084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.149356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.149365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.149692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.149701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.150002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.150011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.150323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.150332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 
00:35:45.134 [2024-11-05 16:59:52.150641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.150650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.150957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.150966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.151134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.151143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.151338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.151346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.151621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.151630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 
00:35:45.134 [2024-11-05 16:59:52.151931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.151942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.152251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.152259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.152547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.152556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.152881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.152889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.153226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.153234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 
00:35:45.134 [2024-11-05 16:59:52.153402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.153411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.153705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.153712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.154038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.154047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.154379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.154387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.154692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.154700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 
00:35:45.134 [2024-11-05 16:59:52.155015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.155023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.155340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.155348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.155681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.155690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.156006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.156015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.156337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.156345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 
00:35:45.134 [2024-11-05 16:59:52.156669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.134 [2024-11-05 16:59:52.156677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.134 qpair failed and we were unable to recover it. 00:35:45.134 [2024-11-05 16:59:52.157004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.135 [2024-11-05 16:59:52.157012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.135 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.157317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.157326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.157642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.157651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.157964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.157972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 
00:35:45.412 [2024-11-05 16:59:52.158151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.158160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.158468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.158476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.158791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.158800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.159146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.159154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.159495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.159504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 
00:35:45.412 [2024-11-05 16:59:52.159659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.159668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.159964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.159973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.160302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.160310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.160628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.412 [2024-11-05 16:59:52.160636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.412 qpair failed and we were unable to recover it. 00:35:45.412 [2024-11-05 16:59:52.160959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.160967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 
00:35:45.413 [2024-11-05 16:59:52.161169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.161178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.161594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.161602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.161917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.161926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.162119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.162128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.162397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.162404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 
00:35:45.413 [2024-11-05 16:59:52.162741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.162754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.162912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.162920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.163240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.163248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.163563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.163572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.163886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.163894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 
00:35:45.413 [2024-11-05 16:59:52.164225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.164235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.164525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.164532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.164691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.164699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.164890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.164899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.165174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.165183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 
00:35:45.413 [2024-11-05 16:59:52.165365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.165373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.165731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.165739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.166060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.166069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.166391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.166400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.166753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.166761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 
00:35:45.413 [2024-11-05 16:59:52.166939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.166947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.167241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.167248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.167425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.167433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.167775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.167783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 00:35:45.413 [2024-11-05 16:59:52.168133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.413 [2024-11-05 16:59:52.168141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.413 qpair failed and we were unable to recover it. 
00:35:45.413 [2024-11-05 16:59:52.168464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.413 [2024-11-05 16:59:52.168472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.413 qpair failed and we were unable to recover it.
00:35:45.413 [message repeated with varying timestamps through 2024-11-05 16:59:52.201650: connect() to 10.0.0.2:4420 kept returning errno = 111 for tqpair=0x7f0cc4000b90, and each time the qpair failed and could not be recovered]
00:35:45.416 [2024-11-05 16:59:52.202031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.202040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.202415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.202423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.202716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.202724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.203079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.203087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.203255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.203265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 
00:35:45.416 [2024-11-05 16:59:52.203498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.203507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.203799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.203809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.203996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.204005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.204228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.204237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.204584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.204593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 
00:35:45.416 [2024-11-05 16:59:52.204774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.204783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.204949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.416 [2024-11-05 16:59:52.204956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.416 qpair failed and we were unable to recover it. 00:35:45.416 [2024-11-05 16:59:52.205234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.205241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.205588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.205596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.205886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.205895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-05 16:59:52.206251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.206259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.206429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.206439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.206771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.206780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.207080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.207088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.207243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.207251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-05 16:59:52.207579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.207587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.207752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.207760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.207942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.207950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.208131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.208141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.208320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.208328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-05 16:59:52.208515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.208524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.208751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.208759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.209039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.209048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.209227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.209235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.209413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.209421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-05 16:59:52.209807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.209829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.209974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.209981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.210181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.210189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.210498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.210506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.210707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.210715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-05 16:59:52.210894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.210903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.211196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.211204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.211397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.211406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.211561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.211569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.211751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.211760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-05 16:59:52.212051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.212059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.212099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.212106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.212447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.212455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.212775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.212783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.417 [2024-11-05 16:59:52.212960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.212968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 
00:35:45.417 [2024-11-05 16:59:52.213256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.417 [2024-11-05 16:59:52.213264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.417 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.213578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.213586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.213880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.213889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.214157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.214166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.214346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.214354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 
00:35:45.418 [2024-11-05 16:59:52.214530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.214537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.214842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.214852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.215210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.215217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.215387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.215395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.215687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.215697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 
00:35:45.418 [2024-11-05 16:59:52.215997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.216006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.216311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.216319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.216624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.216632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.216802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.216809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.217129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.217137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 
00:35:45.418 [2024-11-05 16:59:52.217450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.217458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.217668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.217676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.218019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.218027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.218304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.218312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.218624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.218634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 
00:35:45.418 [2024-11-05 16:59:52.218946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.218954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.219270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.219278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.219469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.219478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.219804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.219812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.220147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.220156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 
00:35:45.418 [2024-11-05 16:59:52.220466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.220476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.220769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.220779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.220960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.220969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.221045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.221054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 00:35:45.418 [2024-11-05 16:59:52.221230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.418 [2024-11-05 16:59:52.221238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.418 qpair failed and we were unable to recover it. 
00:35:45.418 [2024-11-05 16:59:52.221387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:45.418 [2024-11-05 16:59:52.221395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 
00:35:45.418 qpair failed and we were unable to recover it. 
00:35:45.421 [2024-11-05 16:59:52.253607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.421 [2024-11-05 16:59:52.253617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.421 qpair failed and we were unable to recover it. 00:35:45.421 [2024-11-05 16:59:52.253739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.421 [2024-11-05 16:59:52.253751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.421 qpair failed and we were unable to recover it. 00:35:45.421 [2024-11-05 16:59:52.253943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.421 [2024-11-05 16:59:52.253951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.421 qpair failed and we were unable to recover it. 00:35:45.421 [2024-11-05 16:59:52.254131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.421 [2024-11-05 16:59:52.254140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.421 qpair failed and we were unable to recover it. 00:35:45.421 [2024-11-05 16:59:52.254304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.421 [2024-11-05 16:59:52.254311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.421 qpair failed and we were unable to recover it. 
00:35:45.422 [2024-11-05 16:59:52.254636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.254644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.254681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.254687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.254724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.254730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.255021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.255028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.255332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.255340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 
00:35:45.422 [2024-11-05 16:59:52.255657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.255665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.255832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.255839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.256111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.256119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.256455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.256463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.256658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.256666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 
00:35:45.422 [2024-11-05 16:59:52.256718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.256725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.256912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.256919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.257107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.257115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.257281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.257290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.257478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.257486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 
00:35:45.422 [2024-11-05 16:59:52.257782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.257790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.257981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.257989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.258320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.258327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.258505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.258513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.258788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.258795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 
00:35:45.422 [2024-11-05 16:59:52.258985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.258994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.259309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.259317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.259627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.259635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.259999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.260007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.260311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.260319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 
00:35:45.422 [2024-11-05 16:59:52.260646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.260654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.260962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.260971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.261140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.261149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.261415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.261423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.261733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.261741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 
00:35:45.422 [2024-11-05 16:59:52.262062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.262071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.262399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.262408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.262633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.262641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.262979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.262987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.263319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.263327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 
00:35:45.422 [2024-11-05 16:59:52.263636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.263644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.263799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.263807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.264127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.422 [2024-11-05 16:59:52.264134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.422 qpair failed and we were unable to recover it. 00:35:45.422 [2024-11-05 16:59:52.264465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.264473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.264768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.264776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 
00:35:45.423 [2024-11-05 16:59:52.265098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.265105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.265404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.265412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.265703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.265712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.265881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.265890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.266198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.266206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 
00:35:45.423 [2024-11-05 16:59:52.266513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.266521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.266816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.266824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.267158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.267165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.267476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.267484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.267786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.267795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 
00:35:45.423 [2024-11-05 16:59:52.268084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.268092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.268398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.268407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.268717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.268725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.268896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.268905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.269199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.269208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 
00:35:45.423 [2024-11-05 16:59:52.269475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.269483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.269791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.269799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.270127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.270135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.270467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.270475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.270785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.270794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 
00:35:45.423 [2024-11-05 16:59:52.271172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.271180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.271331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.271340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.271671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.271681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.271858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.271865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.272038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.272045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 
00:35:45.423 [2024-11-05 16:59:52.272364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.272372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.272666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.272674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.272959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.272967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.273201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.273209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 00:35:45.423 [2024-11-05 16:59:52.273514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.423 [2024-11-05 16:59:52.273522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.423 qpair failed and we were unable to recover it. 
00:35:45.423 [2024-11-05 16:59:52.273887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.423 [2024-11-05 16:59:52.273895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.423 qpair failed and we were unable to recover it.
[... identical connect()/qpair error triplet repeated for tqpair=0x7f0cc4000b90 (addr=10.0.0.2, port=4420) through 2024-11-05 16:59:52.307822 ...]
00:35:45.426 [2024-11-05 16:59:52.307992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.426 [2024-11-05 16:59:52.308000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.426 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.308316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.308324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.308614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.308622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.308914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.308922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.309241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.309249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 
00:35:45.427 [2024-11-05 16:59:52.309606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.309615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.309950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.309958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.310146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.310155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.310196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.310203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.310469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.310476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 
00:35:45.427 [2024-11-05 16:59:52.310629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.310637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.310957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.310965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.311280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.311288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.311593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.311601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.311759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.311767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 
00:35:45.427 [2024-11-05 16:59:52.312035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.312043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.312380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.312388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.312689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.312697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.313047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.313055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.313350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.313359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 
00:35:45.427 [2024-11-05 16:59:52.313511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.313519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.313832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.313840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.314148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.314156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.314486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.314494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.314849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.314857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 
00:35:45.427 [2024-11-05 16:59:52.315161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.315168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.315553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.315562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.315742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.315757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.315923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.315931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.316252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.316260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 
00:35:45.427 [2024-11-05 16:59:52.316587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.316595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.316910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.316919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.316957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.316964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.317001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.317007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.317171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.317180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 
00:35:45.427 [2024-11-05 16:59:52.317506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.317513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.317675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.317684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.317877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.317886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.318101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.427 [2024-11-05 16:59:52.318109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.427 qpair failed and we were unable to recover it. 00:35:45.427 [2024-11-05 16:59:52.318316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.318325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 
00:35:45.428 [2024-11-05 16:59:52.318612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.318621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.318776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.318783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.318977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.318985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.319208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.319216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.319514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.319522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 
00:35:45.428 [2024-11-05 16:59:52.319692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.319700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.319882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.319890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.320053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.320061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.320363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.320370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.320742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.320754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 
00:35:45.428 [2024-11-05 16:59:52.321039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.321047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.321209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.321219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.321292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.321299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.321582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.321591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.321665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.321673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 
00:35:45.428 [2024-11-05 16:59:52.321873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.321881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.322197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.322204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.322561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.322569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.322881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.322889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.323240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.323249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 
00:35:45.428 [2024-11-05 16:59:52.323418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.323426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.323578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.323587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.323749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.323759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.324123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.324131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.324312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.324321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 
00:35:45.428 [2024-11-05 16:59:52.324608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.324616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.324914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.324923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.324964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.324971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.325323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.325331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.325501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.325511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 
00:35:45.428 [2024-11-05 16:59:52.325683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.325691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.325981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.325990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.326308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.326316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.326640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.326648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.326978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.326987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 
00:35:45.428 [2024-11-05 16:59:52.327313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.327322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.327522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.327531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.428 qpair failed and we were unable to recover it. 00:35:45.428 [2024-11-05 16:59:52.327916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.428 [2024-11-05 16:59:52.327927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.328096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.328103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.328359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.328366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 
00:35:45.429 [2024-11-05 16:59:52.328547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.328555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.328882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.328890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.329202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.329209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.329391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.329399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.329679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.329687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 
00:35:45.429 [2024-11-05 16:59:52.329902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.329910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.330197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.330206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.330368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.330375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.330564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.330572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.330870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.330879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 
00:35:45.429 [2024-11-05 16:59:52.331200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.331207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.331517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.331525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.331843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.331851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.332189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.332198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.332509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.332518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 
00:35:45.429 [2024-11-05 16:59:52.332831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.332840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.333152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.333160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.333453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.333461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.333771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.333780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.334091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.334099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 
00:35:45.429 [2024-11-05 16:59:52.334408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.334416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.334714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.334721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.334915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.334924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.335102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.335110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.335310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.335318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 
00:35:45.429 [2024-11-05 16:59:52.335613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.335621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.335912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.335920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.336218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.336226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.336539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.336548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.336883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.336891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 
00:35:45.429 [2024-11-05 16:59:52.337201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.337209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.337523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.337532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.337860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.337868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.338016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.338024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.338334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.338342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 
00:35:45.429 [2024-11-05 16:59:52.338650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.429 [2024-11-05 16:59:52.338659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.429 qpair failed and we were unable to recover it. 00:35:45.429 [2024-11-05 16:59:52.338970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.338978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.339270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.339280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.339588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.339596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.339907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.339916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 
00:35:45.430 [2024-11-05 16:59:52.340218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.340227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.340436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.340444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.340744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.340757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.340949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.340957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.341264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.341272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 
00:35:45.430 [2024-11-05 16:59:52.341610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.341618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.341772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.341780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.342086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.342094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.342406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.342414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.342712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.342720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 
00:35:45.430 [2024-11-05 16:59:52.343044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.343052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.343203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.343211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.343484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.343492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.343669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.343678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.343957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.343965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 
00:35:45.430 [2024-11-05 16:59:52.344304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.344312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.344614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.344623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.344913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.344923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.345196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.345204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.345517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.345526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 
00:35:45.430 [2024-11-05 16:59:52.345857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.345866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.346193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.346202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.346511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.346520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.346837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.346845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 00:35:45.430 [2024-11-05 16:59:52.347028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.430 [2024-11-05 16:59:52.347045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.430 qpair failed and we were unable to recover it. 
00:35:45.430 [2024-11-05 16:59:52.347364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.347372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.347682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.347691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.347968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.347976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.348136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.348144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.348454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.348462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 
00:35:45.431 [2024-11-05 16:59:52.348774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.348782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.349098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.349107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.349433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.349441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.349738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.349753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.350034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.350043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 
00:35:45.431 [2024-11-05 16:59:52.350231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.350239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.350557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.350565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.350734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.350750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.350934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.350943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.351256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.351265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 
00:35:45.431 [2024-11-05 16:59:52.351453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.351461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.351763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.351771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.352076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.352084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.352393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.352401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.352667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.352676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 
00:35:45.431 [2024-11-05 16:59:52.353036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.353046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.353431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.353439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.353749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.353758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.354059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.354067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.354239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.354249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 
00:35:45.431 [2024-11-05 16:59:52.354556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.354564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.354885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.354893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.355076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.355085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.355367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.355376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 00:35:45.431 [2024-11-05 16:59:52.355693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.431 [2024-11-05 16:59:52.355702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.431 qpair failed and we were unable to recover it. 
00:35:45.434 [2024-11-05 16:59:52.385150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.434 [2024-11-05 16:59:52.385158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.434 qpair failed and we were unable to recover it. 00:35:45.434 [2024-11-05 16:59:52.385472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.434 [2024-11-05 16:59:52.385480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.434 qpair failed and we were unable to recover it. 00:35:45.434 [2024-11-05 16:59:52.385792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.434 [2024-11-05 16:59:52.385800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.434 qpair failed and we were unable to recover it. 00:35:45.434 [2024-11-05 16:59:52.385977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.434 [2024-11-05 16:59:52.385985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.434 qpair failed and we were unable to recover it. 00:35:45.434 [2024-11-05 16:59:52.386314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.434 [2024-11-05 16:59:52.386322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.434 qpair failed and we were unable to recover it. 
00:35:45.434 [2024-11-05 16:59:52.386680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.434 [2024-11-05 16:59:52.386689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.434 qpair failed and we were unable to recover it. 00:35:45.434 [2024-11-05 16:59:52.387001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.434 [2024-11-05 16:59:52.387010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.434 qpair failed and we were unable to recover it. 00:35:45.434 [2024-11-05 16:59:52.387370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.434 [2024-11-05 16:59:52.387378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.434 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.387709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.387717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.387889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.387897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 
00:35:45.435 [2024-11-05 16:59:52.388204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.388212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.388519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.388528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.388722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.388731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.389043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.389051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.389363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.389371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 
00:35:45.435 [2024-11-05 16:59:52.389682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.389691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.390003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.390011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.390337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.390345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.390653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.390661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.391035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.391043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 
00:35:45.435 [2024-11-05 16:59:52.391334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.391342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.391630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.391639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.391968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.391976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.392323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.392332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.392654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.392663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 
00:35:45.435 [2024-11-05 16:59:52.392922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.392929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.393239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.393247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.393555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.393563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.393738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.393750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.394053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.394061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 
00:35:45.435 [2024-11-05 16:59:52.394409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.394417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.394749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.394757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.394927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.394937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.395208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.395216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.395380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.395389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 
00:35:45.435 [2024-11-05 16:59:52.395586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.395594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.395759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.395768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.396059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.396068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.435 [2024-11-05 16:59:52.396377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.435 [2024-11-05 16:59:52.396385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.435 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.396696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.396704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 
00:35:45.436 [2024-11-05 16:59:52.396743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.396761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.396947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.396955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.397272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.397280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.397588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.397596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.397632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.397639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 
00:35:45.436 [2024-11-05 16:59:52.397931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.397940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.398135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.398143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.398451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.398459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.398757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.398765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.398921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.398929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 
00:35:45.436 [2024-11-05 16:59:52.399273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.399280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.399461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.399468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.399756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.399764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.400059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.400067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.400374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.400383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 
00:35:45.436 [2024-11-05 16:59:52.400546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.400554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.400819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.400827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.401132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.401140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.401469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.401479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.401522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.401531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 
00:35:45.436 [2024-11-05 16:59:52.401825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.401833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.402192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.402200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.402242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.402248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.402506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.402513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.402692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.402700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 
00:35:45.436 [2024-11-05 16:59:52.402986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.402994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.403145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.403152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.403350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.403357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.403637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.403644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.403805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.403814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 
00:35:45.436 [2024-11-05 16:59:52.403965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.403973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.404333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.404340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.404512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.404521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.404703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.404710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 00:35:45.436 [2024-11-05 16:59:52.404889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.436 [2024-11-05 16:59:52.404898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.436 qpair failed and we were unable to recover it. 
00:35:45.436 [2024-11-05 16:59:52.405208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.436 [2024-11-05 16:59:52.405215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.436 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, tqpair=0x7f0cc4000b90, addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it") repeats continuously for the rest of this segment, log timestamps 16:59:52.405 through 16:59:52.438 ...]
00:35:45.440 [2024-11-05 16:59:52.438960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.438970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.439292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.439301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.439620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.439629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.439840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.439848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.440155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.440164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 
00:35:45.440 [2024-11-05 16:59:52.440471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.440481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.440653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.440661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.440961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.440969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.441280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.441288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.441448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.441457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 
00:35:45.440 [2024-11-05 16:59:52.441778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.441786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.442104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.442113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.442424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.442433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.442716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.442725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.443012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.443020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 
00:35:45.440 [2024-11-05 16:59:52.443300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.443309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.443628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.443637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.443959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.443968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.444288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.444296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.444630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.444639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 
00:35:45.440 [2024-11-05 16:59:52.444923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.444931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.445254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.445263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.445580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.445588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.445916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.445925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.446250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.446258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 
00:35:45.440 [2024-11-05 16:59:52.446461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.446470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.446792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.446802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.446951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.446959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.447179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.447187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.447457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.447467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 
00:35:45.440 [2024-11-05 16:59:52.447758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.447766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.440 [2024-11-05 16:59:52.447972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.440 [2024-11-05 16:59:52.447980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.440 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.448291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.448299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.448603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.448611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.449000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.449009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 
00:35:45.441 [2024-11-05 16:59:52.449383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.449391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.449699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.449707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.450010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.450019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.450330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.450338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.450666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.450675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 
00:35:45.441 [2024-11-05 16:59:52.450995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.451004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.451314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.451322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.451634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.451642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.451944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.451954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.452304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.452313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 
00:35:45.441 [2024-11-05 16:59:52.452621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.452630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.452910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.452919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.453214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.453222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.453534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.453543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.453854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.453863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 
00:35:45.441 [2024-11-05 16:59:52.454182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.454190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.454485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.454495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.454840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.454848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.455002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.455011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.455321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.455329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 
00:35:45.441 [2024-11-05 16:59:52.455623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.455632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.455945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.455953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.456262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.456270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.456569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.456578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.456872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.456880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 
00:35:45.441 [2024-11-05 16:59:52.457194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.457203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.457358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.457367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.457679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.457687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.457858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.457867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.458197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.458205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 
00:35:45.441 [2024-11-05 16:59:52.458513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.458521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.458844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.458852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.459163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.459172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.459481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.459489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 00:35:45.441 [2024-11-05 16:59:52.459844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.441 [2024-11-05 16:59:52.459855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.441 qpair failed and we were unable to recover it. 
00:35:45.441 [2024-11-05 16:59:52.460136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.442 [2024-11-05 16:59:52.460145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.442 qpair failed and we were unable to recover it. 00:35:45.442 [2024-11-05 16:59:52.460440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.442 [2024-11-05 16:59:52.460448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.442 qpair failed and we were unable to recover it. 00:35:45.442 [2024-11-05 16:59:52.460768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.442 [2024-11-05 16:59:52.460777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.442 qpair failed and we were unable to recover it. 00:35:45.712 [2024-11-05 16:59:52.461130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.461139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.461441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.461449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-05 16:59:52.461788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.461797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.462105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.462114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.462466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.462474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.462771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.462780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.463086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.463095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-05 16:59:52.463404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.463413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.463724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.463734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.464035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.464044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.464391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.464400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.464702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.464711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-05 16:59:52.465012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.465021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.465343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.465352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.465530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.465539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.465814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.465823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.466141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.466150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-05 16:59:52.466462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.466471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.466803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.466812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.467201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.467209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.467509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.467519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.467676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.467685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-05 16:59:52.467882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.467891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.468174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.468183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.468335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.468344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.468533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.468542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.468858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.468868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-05 16:59:52.469185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.469194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.469472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.469482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.469798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.469807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.470119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.470127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.470443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.470452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-05 16:59:52.470769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.470777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.470970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.470979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.471332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.471340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.471624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.471632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 00:35:45.713 [2024-11-05 16:59:52.471960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.713 [2024-11-05 16:59:52.471970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.713 qpair failed and we were unable to recover it. 
00:35:45.713 [2024-11-05 16:59:52.472313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.472321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.472482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.472491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.472799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.472807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.473139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.473147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.473457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.473465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-05 16:59:52.473770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.473779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.474086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.474094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.474264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.474273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.474453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.474461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.474800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.474809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-05 16:59:52.475119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.475127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.475422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.475429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.475725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.475733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.476061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.476070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.476402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.476410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-05 16:59:52.476596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.476604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.476914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.476922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.477235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.477243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.477575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.477583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.477893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.477901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-05 16:59:52.478200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.478208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.478397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.478405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.478722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.478731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.479031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.479039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.479204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.479212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-05 16:59:52.479439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.479448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.479647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.479655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.479949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.479956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.479995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.480001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.480310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.480318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-05 16:59:52.480481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.480489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.480533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.480541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.480848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.480857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.481188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.481197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.481381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.481389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 
00:35:45.714 [2024-11-05 16:59:52.481598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.481606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.481902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.481910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.482221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.714 [2024-11-05 16:59:52.482229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.714 qpair failed and we were unable to recover it. 00:35:45.714 [2024-11-05 16:59:52.482501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.482509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.482683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.482694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.715 [2024-11-05 16:59:52.482876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.482885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.483027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.483035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.483102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.483110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.483402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.483410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.483721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.483729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.715 [2024-11-05 16:59:52.483800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.483807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.483844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.483851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.484062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.484070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.484387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.484395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.484696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.484705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.715 [2024-11-05 16:59:52.485029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.485037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.485223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.485231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.485525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.485533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.485700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.485708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.485928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.485936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.715 [2024-11-05 16:59:52.486251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.486259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.486447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.486455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.486768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.486776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.487112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.487120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.487312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.487321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.715 [2024-11-05 16:59:52.487493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.487502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.487853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.487861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.488029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.488037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.488219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.488226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 00:35:45.715 [2024-11-05 16:59:52.488524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.715 [2024-11-05 16:59:52.488531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.715 qpair failed and we were unable to recover it. 
00:35:45.717 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:45.717 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:35:45.717 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:45.717 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.717 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:45.718 [2024-11-05 16:59:52.520870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.520879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.521252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.521260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.521567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.521576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.521886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.521895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.522235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.522244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 
00:35:45.718 [2024-11-05 16:59:52.522397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.522405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.522600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.522608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.522789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.522798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.523116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.523124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.523308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.523317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 
00:35:45.718 [2024-11-05 16:59:52.523508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.718 [2024-11-05 16:59:52.523516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.718 qpair failed and we were unable to recover it. 00:35:45.718 [2024-11-05 16:59:52.523708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.523716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.523894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.523903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.524206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.524214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.524539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.524548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 
00:35:45.719 [2024-11-05 16:59:52.524835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.524843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.525241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.525249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.525549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.525557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.525873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.525881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.526060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.526068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 
00:35:45.719 [2024-11-05 16:59:52.526355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.526363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.526675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.526683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.526864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.526872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.527187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.527195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.527495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.527506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 
00:35:45.719 [2024-11-05 16:59:52.527818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.527827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.528026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.528035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.528251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.528259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.528554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.528562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.528866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.528875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 
00:35:45.719 [2024-11-05 16:59:52.529073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.529082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.529357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.529366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.529545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.529554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.529839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.529849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.530018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.530026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 
00:35:45.719 [2024-11-05 16:59:52.530331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.530339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.530519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.530528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.530870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.530879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.531096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.531105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.531288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.531297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 
00:35:45.719 [2024-11-05 16:59:52.531504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.531512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.531678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.531687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.531870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.531879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.532168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.532176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.532485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.532494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 
00:35:45.719 [2024-11-05 16:59:52.532671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.532680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.532867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.532876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.533037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.533046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.533223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.533230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.719 qpair failed and we were unable to recover it. 00:35:45.719 [2024-11-05 16:59:52.533555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.719 [2024-11-05 16:59:52.533563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 
00:35:45.720 [2024-11-05 16:59:52.533877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.533885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.534193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.534201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.534505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.534513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.534739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.534751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.534925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.534932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 
00:35:45.720 [2024-11-05 16:59:52.535233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.535241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.535421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.535430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.535757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.535767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.535925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.535933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.536211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.536218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 
00:35:45.720 [2024-11-05 16:59:52.536532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.536541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.536847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.536855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.537061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.537070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.537108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.537115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.537402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.537411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 
00:35:45.720 [2024-11-05 16:59:52.537581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.537590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.537802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.537811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.538141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.538149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.538302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.538310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.538623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.538630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 
00:35:45.720 [2024-11-05 16:59:52.538958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.538966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.539139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.539147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.539474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.539481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.539827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.539835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 00:35:45.720 [2024-11-05 16:59:52.540149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.720 [2024-11-05 16:59:52.540158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.720 qpair failed and we were unable to recover it. 
00:35:45.720 [2024-11-05 16:59:52.540472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.720 [2024-11-05 16:59:52.540481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.720 qpair failed and we were unable to recover it.
[identical connect() failed / sock connection error / qpair failed triplets for tqpair=0x7f0cc4000b90 (addr=10.0.0.2, port=4420) repeated through 16:59:52.548266; omitted]
00:35:45.721 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:45.721 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:45.721 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:45.721 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[identical connect() failed / sock connection error / qpair failed triplets continue, interleaved with the trace above, through 16:59:52.574020; omitted]
00:35:45.723 [2024-11-05 16:59:52.574330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.574339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-05 16:59:52.574670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.574678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-05 16:59:52.574967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.574976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-05 16:59:52.575289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.575297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-05 16:59:52.575476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.575484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 
00:35:45.723 [2024-11-05 16:59:52.575821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.575830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-05 16:59:52.576089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.576097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-05 16:59:52.576405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.576413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-05 16:59:52.576710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.723 [2024-11-05 16:59:52.576718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.723 qpair failed and we were unable to recover it. 00:35:45.723 [2024-11-05 16:59:52.577012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.577020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-05 16:59:52.577366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.577374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.577682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.577691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.577978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.577987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.578290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.578298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.578577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.578586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-05 16:59:52.578901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.578910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.579077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.579085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.579284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.579293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.579573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.579581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.579776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.579785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-05 16:59:52.580076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.580085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.580428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.580438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.580592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.580609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.580918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.580926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.581237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.581246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-05 16:59:52.581550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.581559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.581864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.581873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.582172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.582180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.582490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.582499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.582824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.582832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-05 16:59:52.583149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.583158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.583469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.583479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.583862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.583871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.584202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.584210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.584549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.584557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-05 16:59:52.584882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.584890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.585199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.585207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.585540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.585549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.585866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.585875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.586186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.586194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 [2024-11-05 16:59:52.586560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.586568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.586960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.586968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.587275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.587283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.587457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.587465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 
00:35:45.724 Malloc0 00:35:45.724 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.724 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:45.724 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.724 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:45.724 [2024-11-05 16:59:52.588320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.588340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.724 [2024-11-05 16:59:52.588647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.724 [2024-11-05 16:59:52.588657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.724 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.588839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.588848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.589015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.589023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 
00:35:45.725 [2024-11-05 16:59:52.589293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.589302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.589656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.589664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.590048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.590057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.590339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.590347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.590703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.590711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 
00:35:45.725 [2024-11-05 16:59:52.591014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.591023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.591340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.591349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.591474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.725 [2024-11-05 16:59:52.591527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.591535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.591852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.591861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.592210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.592218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 
00:35:45.725 [2024-11-05 16:59:52.592531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.592540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.592861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.592870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.593272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.593280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.593585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.593593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.593922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.593931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 
00:35:45.725 [2024-11-05 16:59:52.594113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.594122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.594437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.594445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.594773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.594781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.595108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.595116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.595274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.595282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 
00:35:45.725 [2024-11-05 16:59:52.595584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.595591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.595904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.595912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.596158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.596166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.596445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.596453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.596726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.596736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 
00:35:45.725 [2024-11-05 16:59:52.596996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.597004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.597209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.597218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.597525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.597534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.597860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.597868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.598178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.598187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 
00:35:45.725 [2024-11-05 16:59:52.598503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.598511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.598682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.598691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.599009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.599017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 00:35:45.725 [2024-11-05 16:59:52.599384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.725 [2024-11-05 16:59:52.599392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.725 qpair failed and we were unable to recover it. 
00:35:45.725 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:45.725 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:45.725 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:45.725 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:45.726 [2024-11-05 16:59:52.600061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.726 [2024-11-05 16:59:52.600075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.726 qpair failed and we were unable to recover it.
00:35:45.726 [2024-11-05 16:59:52.600403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.726 [2024-11-05 16:59:52.600413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.726 qpair failed and we were unable to recover it.
00:35:45.726 [2024-11-05 16:59:52.600733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.726 [2024-11-05 16:59:52.600742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.726 qpair failed and we were unable to recover it.
00:35:45.726 [2024-11-05 16:59:52.601080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.601088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.601408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.601416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.601738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.601750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.602122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.602130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.602432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.602441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 
00:35:45.726 [2024-11-05 16:59:52.602631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.602640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.602940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.602949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.603310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.603318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.603606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.603613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.603914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.603923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 
00:35:45.726 [2024-11-05 16:59:52.604253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.604261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.604568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.604576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.604899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.604910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.605257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.605265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.605565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.605573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 
00:35:45.726 [2024-11-05 16:59:52.605860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.605869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.606207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.606215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.606408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.606416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.606713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.606722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.607037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.607046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 
00:35:45.726 [2024-11-05 16:59:52.607340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.607348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.607673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.607681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.607861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.607869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.608145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.608153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 00:35:45.726 [2024-11-05 16:59:52.608488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.726 [2024-11-05 16:59:52.608497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.726 qpair failed and we were unable to recover it. 
00:35:45.726 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:45.726 [2024-11-05 16:59:52.608636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.726 [2024-11-05 16:59:52.608644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.726 qpair failed and we were unable to recover it.
00:35:45.726 [2024-11-05 16:59:52.608936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.726 [2024-11-05 16:59:52.608944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.726 qpair failed and we were unable to recover it.
00:35:45.726 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:45.726 [2024-11-05 16:59:52.609270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.726 [2024-11-05 16:59:52.609279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.726 qpair failed and we were unable to recover it.
00:35:45.726 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:45.726 [2024-11-05 16:59:52.609471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.726 [2024-11-05 16:59:52.609479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.726 qpair failed and we were unable to recover it.
00:35:45.726 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:45.726 [2024-11-05 16:59:52.609800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.726 [2024-11-05 16:59:52.609808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.726 qpair failed and we were unable to recover it.
00:35:45.726 [2024-11-05 16:59:52.610000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.727 [2024-11-05 16:59:52.610009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.727 qpair failed and we were unable to recover it.
00:35:45.727 [2024-11-05 16:59:52.610201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.727 [2024-11-05 16:59:52.610209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.727 qpair failed and we were unable to recover it.
00:35:45.727 [2024-11-05 16:59:52.610506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.727 [2024-11-05 16:59:52.610514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.727 qpair failed and we were unable to recover it.
00:35:45.727 [2024-11-05 16:59:52.610849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.727 [2024-11-05 16:59:52.610857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.727 qpair failed and we were unable to recover it.
00:35:45.727 [2024-11-05 16:59:52.611045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.611053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.611102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.611110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.611403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.611412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.611610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.611618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.611782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.611789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 
00:35:45.727 [2024-11-05 16:59:52.611852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.611861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.612099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.612106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.612300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.612308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.612627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.612635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.612966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.612975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 
00:35:45.727 [2024-11-05 16:59:52.613154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.613162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.613484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.613494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.613662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.613670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.613883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.613891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.614221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.614229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 
00:35:45.727 [2024-11-05 16:59:52.614272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.614278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.614545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.614555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.614884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.614893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.615201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.615209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.615390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.615400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 
00:35:45.727 [2024-11-05 16:59:52.615688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.615697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.615875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.615883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.616153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.616161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.616330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.616339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.616528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.616536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 
00:35:45.727 [2024-11-05 16:59:52.616759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.616768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.617147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.617154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.617317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.617326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.617642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.617650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.617964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.617973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 
00:35:45.727 [2024-11-05 16:59:52.618305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.618314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.618479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.618488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.618752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.618760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.618931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.727 [2024-11-05 16:59:52.618940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.727 qpair failed and we were unable to recover it. 00:35:45.727 [2024-11-05 16:59:52.619131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.728 [2024-11-05 16:59:52.619139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420 00:35:45.728 qpair failed and we were unable to recover it. 
00:35:45.728 [2024-11-05 16:59:52.619418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.619426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.619750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.619759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.620073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.620082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.620237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.620246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.620509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.620518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.620817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.620825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:45.728 [2024-11-05 16:59:52.621231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.621240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:45.728 [2024-11-05 16:59:52.621552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.621561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.621717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.621726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
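The xtrace lines woven through the connection errors are the target-side setup the test script performs via `rpc_cmd`. Consolidated, and assuming a running SPDK target with the stock `scripts/rpc.py` tool and a `Malloc0` bdev already created, the sequence is roughly:

```shell
# Sketch of the target-side RPC sequence traced above; assumes an SPDK
# target process is already running and Malloc0 exists. Not runnable
# standalone -- each call talks to the live target over its RPC socket.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Only after the final `nvmf_subsystem_add_listener` call does the target begin accepting TCP connections, which is why every earlier host-side connect attempt in this log fails.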
00:35:45.728 [2024-11-05 16:59:52.622021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.622030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.622389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.622397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.622681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.622690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.623003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.623011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.623324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.623333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.623688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.623696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.624003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.624011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.624321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.624330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.624643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.624651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.624851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.624860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.625117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.625126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.625303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.625311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.625526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.625535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.625809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.625818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.626156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.626165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.626471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.626480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.626780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.626791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.627111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.627120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.627418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.627427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.627704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:45.728 [2024-11-05 16:59:52.627740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.728 [2024-11-05 16:59:52.627751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cc4000b90 with addr=10.0.0.2, port=4420
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 [2024-11-05 16:59:52.632218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.728 [2024-11-05 16:59:52.632293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.728 [2024-11-05 16:59:52.632305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.728 [2024-11-05 16:59:52.632311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.728 [2024-11-05 16:59:52.632316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.728 [2024-11-05 16:59:52.632332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.728 qpair failed and we were unable to recover it.
00:35:45.728 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:45.728 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:45.728 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:45.728 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:45.728 [2024-11-05 16:59:52.642175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.728 [2024-11-05 16:59:52.642263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.728 [2024-11-05 16:59:52.642274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.728 [2024-11-05 16:59:52.642280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.728 [2024-11-05 16:59:52.642285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.642296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:45.729 16:59:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3368482
00:35:45.729 [2024-11-05 16:59:52.652160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.652214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.652224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.652229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.652234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.652244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.662176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.662229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.662239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.662244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.662249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.662259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.672139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.672191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.672201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.672206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.672213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.672224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.682074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.682124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.682134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.682139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.682143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.682154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.692163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.692243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.692253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.692258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.692263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.692275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.702094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.702147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.702156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.702162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.702167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.702177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.712274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.712326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.712336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.712341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.712346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.712356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.722144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.722196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.722206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.722211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.722216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.722226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.732304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.732353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.732363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.732368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.732372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.732382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.742319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.742428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.742438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.742444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.742449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.742459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.752363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.752418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.752427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.752432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.752437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.752447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.729 [2024-11-05 16:59:52.762238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.729 [2024-11-05 16:59:52.762283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.729 [2024-11-05 16:59:52.762296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.729 [2024-11-05 16:59:52.762301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.729 [2024-11-05 16:59:52.762306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.729 [2024-11-05 16:59:52.762316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.729 qpair failed and we were unable to recover it.
00:35:45.991 [2024-11-05 16:59:52.772399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.991 [2024-11-05 16:59:52.772445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.991 [2024-11-05 16:59:52.772455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.991 [2024-11-05 16:59:52.772460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.991 [2024-11-05 16:59:52.772465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.991 [2024-11-05 16:59:52.772475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.991 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.782420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.782469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.782479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.782485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.782489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.782499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.792332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.792413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.792423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.792428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.792432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.792444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.802465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.802511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.802521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.802529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.802534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.802544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.812494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.812541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.812551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.812556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.812561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.812571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.822523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.822575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.822584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.822589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.822594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.822604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.832520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.832566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.832576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.832581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.832586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.832596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.842580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.842635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.842645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.842650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.842655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.842671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.852602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.852651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.852660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.852665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.852670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.852680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.862770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.862843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.862854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.862859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.862864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.862874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.872745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.872800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.872809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.872815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.872820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.872830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.882723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.882804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.882814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.882819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.882824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.882835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.892775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.892825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.892835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.892841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.892845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.892856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.902725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.992 [2024-11-05 16:59:52.902784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.992 [2024-11-05 16:59:52.902794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.992 [2024-11-05 16:59:52.902799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.992 [2024-11-05 16:59:52.902804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.992 [2024-11-05 16:59:52.902815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.992 qpair failed and we were unable to recover it.
00:35:45.992 [2024-11-05 16:59:52.912784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:45.993 [2024-11-05 16:59:52.912831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:45.993 [2024-11-05 16:59:52.912841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:45.993 [2024-11-05 16:59:52.912846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:45.993 [2024-11-05 16:59:52.912851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:45.993 [2024-11-05 16:59:52.912861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:45.993 qpair failed and we were unable to recover it.
00:35:45.993 [2024-11-05 16:59:52.922809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:52.922855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:52.922865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:52.922870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:52.922874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:52.922885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:52.932845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:52.932890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:52.932900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:52.932908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:52.932913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:52.932923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:52.942856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:52.942907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:52.942917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:52.942922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:52.942927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:52.942936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:52.952901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:52.952952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:52.952962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:52.952968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:52.952972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:52.952982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:52.962783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:52.962832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:52.962842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:52.962847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:52.962852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:52.962862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:52.972922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:52.972972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:52.972981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:52.972987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:52.972991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:52.973004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:52.982960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:52.983011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:52.983021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:52.983026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:52.983031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:52.983041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:52.993067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:52.993125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:52.993134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:52.993139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:52.993144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:52.993154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:53.003039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:53.003115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:53.003125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:53.003130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:53.003135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:53.003145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:53.013050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:53.013097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:53.013108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:53.013113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:53.013118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:53.013128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:53.023080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:53.023127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:53.023137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:53.023142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:53.023146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:53.023157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:53.033121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:53.033167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.993 [2024-11-05 16:59:53.033177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.993 [2024-11-05 16:59:53.033182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.993 [2024-11-05 16:59:53.033186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.993 [2024-11-05 16:59:53.033197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.993 qpair failed and we were unable to recover it. 
00:35:45.993 [2024-11-05 16:59:53.043087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.993 [2024-11-05 16:59:53.043129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.994 [2024-11-05 16:59:53.043139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.994 [2024-11-05 16:59:53.043145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.994 [2024-11-05 16:59:53.043149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.994 [2024-11-05 16:59:53.043159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.994 qpair failed and we were unable to recover it. 
00:35:45.994 [2024-11-05 16:59:53.053147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:45.994 [2024-11-05 16:59:53.053203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:45.994 [2024-11-05 16:59:53.053213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:45.994 [2024-11-05 16:59:53.053218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:45.994 [2024-11-05 16:59:53.053223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:45.994 [2024-11-05 16:59:53.053233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.994 qpair failed and we were unable to recover it. 
00:35:46.255 [2024-11-05 16:59:53.063059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.255 [2024-11-05 16:59:53.063109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.255 [2024-11-05 16:59:53.063123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.255 [2024-11-05 16:59:53.063128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.255 [2024-11-05 16:59:53.063133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.255 [2024-11-05 16:59:53.063144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.255 qpair failed and we were unable to recover it. 
00:35:46.255 [2024-11-05 16:59:53.073214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.073265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.073276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.073281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.073286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.073296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.083261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.083339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.083348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.083353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.083358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.083369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.093144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.093187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.093197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.093202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.093207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.093217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.103348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.103400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.103410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.103415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.103423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.103434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.113327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.113374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.113384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.113389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.113394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.113404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.123349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.123396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.123405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.123410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.123415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.123425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.133393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.133476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.133486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.133491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.133496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.133506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.143431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.143487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.143506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.143512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.143517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.143532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.153442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.153494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.153512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.256 [2024-11-05 16:59:53.153519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.256 [2024-11-05 16:59:53.153524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.256 [2024-11-05 16:59:53.153539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.256 qpair failed and we were unable to recover it. 
00:35:46.256 [2024-11-05 16:59:53.163457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.256 [2024-11-05 16:59:53.163515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.256 [2024-11-05 16:59:53.163534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.163541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.163547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.163561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.173493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.173560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.173579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.173585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.173590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.173606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.183528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.183579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.183590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.183596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.183601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.183612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.193563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.193612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.193625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.193631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.193636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.193647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.203481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.203534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.203544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.203549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.203554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.203564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.213547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.213595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.213605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.213610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.213615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.213626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.223640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.223691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.223701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.223706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.223711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.223721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.233676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.233724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.233734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.233739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.233751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.233762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.243590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.243645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.243655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.243661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.243665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.243676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.253715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.253771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.253781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.253786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.253791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.257 [2024-11-05 16:59:53.253801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.257 qpair failed and we were unable to recover it. 
00:35:46.257 [2024-11-05 16:59:53.263610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.257 [2024-11-05 16:59:53.263692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.257 [2024-11-05 16:59:53.263702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.257 [2024-11-05 16:59:53.263707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.257 [2024-11-05 16:59:53.263711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.258 [2024-11-05 16:59:53.263722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.258 qpair failed and we were unable to recover it. 
00:35:46.258 [2024-11-05 16:59:53.273780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.258 [2024-11-05 16:59:53.273862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.258 [2024-11-05 16:59:53.273872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.258 [2024-11-05 16:59:53.273877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.258 [2024-11-05 16:59:53.273882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.258 [2024-11-05 16:59:53.273893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.258 qpair failed and we were unable to recover it. 
00:35:46.258 [2024-11-05 16:59:53.283663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.258 [2024-11-05 16:59:53.283705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.258 [2024-11-05 16:59:53.283715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.258 [2024-11-05 16:59:53.283720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.258 [2024-11-05 16:59:53.283725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.258 [2024-11-05 16:59:53.283735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.258 qpair failed and we were unable to recover it. 
00:35:46.258 [2024-11-05 16:59:53.293702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.258 [2024-11-05 16:59:53.293754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.258 [2024-11-05 16:59:53.293765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.258 [2024-11-05 16:59:53.293770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.258 [2024-11-05 16:59:53.293774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.258 [2024-11-05 16:59:53.293784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.258 qpair failed and we were unable to recover it. 
00:35:46.258 [2024-11-05 16:59:53.303857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.258 [2024-11-05 16:59:53.303908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.258 [2024-11-05 16:59:53.303918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.258 [2024-11-05 16:59:53.303923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.258 [2024-11-05 16:59:53.303928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.258 [2024-11-05 16:59:53.303938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.258 qpair failed and we were unable to recover it. 
00:35:46.258 [2024-11-05 16:59:53.313888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.258 [2024-11-05 16:59:53.313938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.258 [2024-11-05 16:59:53.313948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.258 [2024-11-05 16:59:53.313954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.258 [2024-11-05 16:59:53.313958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.258 [2024-11-05 16:59:53.313968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.258 qpair failed and we were unable to recover it. 
00:35:46.520 [2024-11-05 16:59:53.323942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.520 [2024-11-05 16:59:53.324018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.520 [2024-11-05 16:59:53.324028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.520 [2024-11-05 16:59:53.324033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.520 [2024-11-05 16:59:53.324038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.520 [2024-11-05 16:59:53.324048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.520 qpair failed and we were unable to recover it. 
00:35:46.520 [2024-11-05 16:59:53.333954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.520 [2024-11-05 16:59:53.334003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.520 [2024-11-05 16:59:53.334013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.520 [2024-11-05 16:59:53.334018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.520 [2024-11-05 16:59:53.334023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.520 [2024-11-05 16:59:53.334033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.520 qpair failed and we were unable to recover it. 
00:35:46.520 [2024-11-05 16:59:53.343958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.520 [2024-11-05 16:59:53.344008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.520 [2024-11-05 16:59:53.344018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.520 [2024-11-05 16:59:53.344023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.344027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.344037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.353999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.354049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.354059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.354064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.354069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.354079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.364017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.364066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.364075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.364083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.364088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.364098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.374020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.374070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.374080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.374085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.374090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.374100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.384106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.384154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.384164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.384169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.384173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.384183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.394128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.394182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.394191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.394196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.394201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.394211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.404140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.404258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.404269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.404274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.404279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.404292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.414167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.414211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.414220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.414225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.414230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.414240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.424198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.424247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.424257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.424262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.424267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.424277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.434238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.434293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.434303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.434308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.434313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.434323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.444227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.444295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.444304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.444309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.444314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.444324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.454266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.454319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.454329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.454334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.454338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.454348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.464314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.464366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.464375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.464380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.464385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.464395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.474347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.474392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.474402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.474407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.474411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.521 [2024-11-05 16:59:53.474421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.521 qpair failed and we were unable to recover it. 
00:35:46.521 [2024-11-05 16:59:53.484333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.521 [2024-11-05 16:59:53.484378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.521 [2024-11-05 16:59:53.484387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.521 [2024-11-05 16:59:53.484392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.521 [2024-11-05 16:59:53.484397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.522 [2024-11-05 16:59:53.484406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.522 qpair failed and we were unable to recover it. 
00:35:46.522 [2024-11-05 16:59:53.494366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:46.522 [2024-11-05 16:59:53.494409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:46.522 [2024-11-05 16:59:53.494418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:46.522 [2024-11-05 16:59:53.494432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:46.522 [2024-11-05 16:59:53.494437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:46.522 [2024-11-05 16:59:53.494447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:46.522 qpair failed and we were unable to recover it. 
00:35:46.522 [2024-11-05 16:59:53.504391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.522 [2024-11-05 16:59:53.504440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.522 [2024-11-05 16:59:53.504450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.522 [2024-11-05 16:59:53.504455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.522 [2024-11-05 16:59:53.504460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.522 [2024-11-05 16:59:53.504469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.522 qpair failed and we were unable to recover it.
00:35:46.522 [2024-11-05 16:59:53.514427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.522 [2024-11-05 16:59:53.514516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.522 [2024-11-05 16:59:53.514535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.522 [2024-11-05 16:59:53.514541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.522 [2024-11-05 16:59:53.514546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.522 [2024-11-05 16:59:53.514560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.522 qpair failed and we were unable to recover it.
00:35:46.522 [2024-11-05 16:59:53.524474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.522 [2024-11-05 16:59:53.524525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.522 [2024-11-05 16:59:53.524543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.522 [2024-11-05 16:59:53.524550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.522 [2024-11-05 16:59:53.524555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.522 [2024-11-05 16:59:53.524569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.522 qpair failed and we were unable to recover it.
00:35:46.522 [2024-11-05 16:59:53.534535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.522 [2024-11-05 16:59:53.534586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.522 [2024-11-05 16:59:53.534605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.522 [2024-11-05 16:59:53.534611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.522 [2024-11-05 16:59:53.534616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.522 [2024-11-05 16:59:53.534634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.522 qpair failed and we were unable to recover it.
00:35:46.522 [2024-11-05 16:59:53.544542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.522 [2024-11-05 16:59:53.544629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.522 [2024-11-05 16:59:53.544640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.522 [2024-11-05 16:59:53.544646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.522 [2024-11-05 16:59:53.544650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.522 [2024-11-05 16:59:53.544662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.522 qpair failed and we were unable to recover it.
00:35:46.522 [2024-11-05 16:59:53.554560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.522 [2024-11-05 16:59:53.554640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.522 [2024-11-05 16:59:53.554650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.522 [2024-11-05 16:59:53.554655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.522 [2024-11-05 16:59:53.554660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.522 [2024-11-05 16:59:53.554670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.522 qpair failed and we were unable to recover it.
00:35:46.522 [2024-11-05 16:59:53.564576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.522 [2024-11-05 16:59:53.564624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.522 [2024-11-05 16:59:53.564634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.522 [2024-11-05 16:59:53.564639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.522 [2024-11-05 16:59:53.564644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.522 [2024-11-05 16:59:53.564654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.522 qpair failed and we were unable to recover it.
00:35:46.522 [2024-11-05 16:59:53.574476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.522 [2024-11-05 16:59:53.574522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.522 [2024-11-05 16:59:53.574532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.522 [2024-11-05 16:59:53.574537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.522 [2024-11-05 16:59:53.574542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.522 [2024-11-05 16:59:53.574552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.522 qpair failed and we were unable to recover it.
00:35:46.784 [2024-11-05 16:59:53.584673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.784 [2024-11-05 16:59:53.584723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.784 [2024-11-05 16:59:53.584733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.784 [2024-11-05 16:59:53.584738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.784 [2024-11-05 16:59:53.584743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.584757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.594678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.594758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.594767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.594773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.594777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.594788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.604697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.604753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.604763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.604768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.604773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.604783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.614714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.614758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.614768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.614773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.614778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.614788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.624775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.624826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.624842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.624847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.624852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.624862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.634776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.634827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.634836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.634842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.634846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.634857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.644789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.644841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.644851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.644856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.644861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.644871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.654808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.654858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.654868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.654873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.654877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.654888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.664727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.664780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.664790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.664795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.664802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.664812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.674903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.674949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.674959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.674964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.674968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.674978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.684771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.684821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.684832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.684837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.684841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.684852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.694944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.694988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.694997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.695002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.695007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.695017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.785 [2024-11-05 16:59:53.704957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.785 [2024-11-05 16:59:53.705022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.785 [2024-11-05 16:59:53.705032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.785 [2024-11-05 16:59:53.705037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.785 [2024-11-05 16:59:53.705041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.785 [2024-11-05 16:59:53.705051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.785 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.715007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.715059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.715068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.715074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.715078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.715088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.725025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.725111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.725120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.725126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.725130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.725140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.734918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.735017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.735027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.735032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.735037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.735047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.744945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.744994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.745004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.745009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.745014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.745024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.755119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.755170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.755182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.755187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.755191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.755201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.765100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.765150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.765160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.765165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.765170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.765180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.775150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.775235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.775244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.775249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.775254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.775264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.785189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.785267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.785276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.785281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.785286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.785296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.795222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.795268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.795277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.795282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.795289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.795299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.805245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.805312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.805322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.805327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.805331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.805341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.815263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.815315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.815324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.815329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.815334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.815344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.825315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.825403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.825412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.825417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.825422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.825431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.835322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.786 [2024-11-05 16:59:53.835371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.786 [2024-11-05 16:59:53.835380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.786 [2024-11-05 16:59:53.835385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.786 [2024-11-05 16:59:53.835390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.786 [2024-11-05 16:59:53.835400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.786 qpair failed and we were unable to recover it.
00:35:46.786 [2024-11-05 16:59:53.845342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:46.787 [2024-11-05 16:59:53.845386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:46.787 [2024-11-05 16:59:53.845396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:46.787 [2024-11-05 16:59:53.845401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:46.787 [2024-11-05 16:59:53.845406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:46.787 [2024-11-05 16:59:53.845416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:46.787 qpair failed and we were unable to recover it.
00:35:47.080 [2024-11-05 16:59:53.855330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.080 [2024-11-05 16:59:53.855416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.080 [2024-11-05 16:59:53.855426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.080 [2024-11-05 16:59:53.855431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.080 [2024-11-05 16:59:53.855436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.080 [2024-11-05 16:59:53.855446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.080 qpair failed and we were unable to recover it. 
00:35:47.080 [2024-11-05 16:59:53.865404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.080 [2024-11-05 16:59:53.865465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.080 [2024-11-05 16:59:53.865484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.080 [2024-11-05 16:59:53.865490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.080 [2024-11-05 16:59:53.865496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.080 [2024-11-05 16:59:53.865510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.080 qpair failed and we were unable to recover it. 
00:35:47.080 [2024-11-05 16:59:53.875449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.080 [2024-11-05 16:59:53.875499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.080 [2024-11-05 16:59:53.875518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.080 [2024-11-05 16:59:53.875524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.080 [2024-11-05 16:59:53.875529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.080 [2024-11-05 16:59:53.875544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.080 qpair failed and we were unable to recover it. 
00:35:47.080 [2024-11-05 16:59:53.885323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.080 [2024-11-05 16:59:53.885370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.080 [2024-11-05 16:59:53.885381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.080 [2024-11-05 16:59:53.885387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.080 [2024-11-05 16:59:53.885392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.080 [2024-11-05 16:59:53.885403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.080 qpair failed and we were unable to recover it. 
00:35:47.080 [2024-11-05 16:59:53.895482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.080 [2024-11-05 16:59:53.895566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.080 [2024-11-05 16:59:53.895576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.080 [2024-11-05 16:59:53.895581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.080 [2024-11-05 16:59:53.895586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.080 [2024-11-05 16:59:53.895596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.080 qpair failed and we were unable to recover it. 
00:35:47.080 [2024-11-05 16:59:53.905501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.080 [2024-11-05 16:59:53.905554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.080 [2024-11-05 16:59:53.905573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.080 [2024-11-05 16:59:53.905580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.080 [2024-11-05 16:59:53.905585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.080 [2024-11-05 16:59:53.905599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.080 qpair failed and we were unable to recover it. 
00:35:47.080 [2024-11-05 16:59:53.915550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.080 [2024-11-05 16:59:53.915644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.080 [2024-11-05 16:59:53.915663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.080 [2024-11-05 16:59:53.915669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.080 [2024-11-05 16:59:53.915674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.915688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:53.925570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:53.925620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:53.925630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:53.925639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:53.925644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.925656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:53.935464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:53.935513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:53.935523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:53.935528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:53.935533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.935543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:53.945649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:53.945732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:53.945741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:53.945751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:53.945756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.945767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:53.955671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:53.955726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:53.955736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:53.955741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:53.955749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.955760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:53.965543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:53.965587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:53.965596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:53.965601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:53.965606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.965620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:53.975708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:53.975763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:53.975773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:53.975779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:53.975783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.975794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:53.985754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:53.985805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:53.985815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:53.985820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:53.985825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.985835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:53.995821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:53.995884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:53.995894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:53.995899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:53.995904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:53.995914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:54.005791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:54.005841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:54.005851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:54.005856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:54.005861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:54.005871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:54.015698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:54.015792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:54.015802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:54.015807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:54.015812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:54.015822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:54.025932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:54.026038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:54.026048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:54.026053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.081 [2024-11-05 16:59:54.026058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.081 [2024-11-05 16:59:54.026068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.081 qpair failed and we were unable to recover it. 
00:35:47.081 [2024-11-05 16:59:54.035831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.081 [2024-11-05 16:59:54.035913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.081 [2024-11-05 16:59:54.035923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.081 [2024-11-05 16:59:54.035928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.035933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.035943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.045892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.045941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.045951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.045956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.045961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.045972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.055949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.055995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.056007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.056012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.056017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.056027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.065836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.065888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.065897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.065902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.065907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.065917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.075993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.076039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.076048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.076054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.076058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.076069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.086033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.086081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.086092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.086097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.086101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.086111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.096049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.096132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.096142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.096148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.096153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.096166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.106086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.106135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.106145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.106150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.106155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.106165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.116088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.116138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.116148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.116153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.116158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.116168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.082 [2024-11-05 16:59:54.126128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.082 [2024-11-05 16:59:54.126178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.082 [2024-11-05 16:59:54.126188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.082 [2024-11-05 16:59:54.126193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.082 [2024-11-05 16:59:54.126198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.082 [2024-11-05 16:59:54.126208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.082 qpair failed and we were unable to recover it. 
00:35:47.400 [2024-11-05 16:59:54.136162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.400 [2024-11-05 16:59:54.136210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.400 [2024-11-05 16:59:54.136220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.400 [2024-11-05 16:59:54.136225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.400 [2024-11-05 16:59:54.136230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.400 [2024-11-05 16:59:54.136240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.400 qpair failed and we were unable to recover it. 
00:35:47.400 [2024-11-05 16:59:54.146075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.400 [2024-11-05 16:59:54.146126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.400 [2024-11-05 16:59:54.146136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.400 [2024-11-05 16:59:54.146141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.400 [2024-11-05 16:59:54.146146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.400 [2024-11-05 16:59:54.146157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.400 qpair failed and we were unable to recover it. 
00:35:47.400 [2024-11-05 16:59:54.156215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.400 [2024-11-05 16:59:54.156266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.400 [2024-11-05 16:59:54.156277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.400 [2024-11-05 16:59:54.156284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.400 [2024-11-05 16:59:54.156290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.400 [2024-11-05 16:59:54.156302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.166235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.166285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.166295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.166300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.166305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.166318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.176253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.176310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.176320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.176325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.176330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.176340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.186172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.186230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.186242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.186248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.186253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.186263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.196342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.196393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.196403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.196408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.196413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.196423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.206238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.206289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.206299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.206304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.206309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.206320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.216384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.216462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.216472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.216477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.216482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.216492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.226412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.226471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.226481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.226486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.226493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.226503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.236447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.236495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.236505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.236510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.236515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.236525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.246475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.246518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.246527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.246533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.246537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.246547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.256349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.256398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.256407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.256413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.256418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.256428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.266523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.266571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.266581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.266586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.266591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.266601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.276557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.276609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.276628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.276634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.276639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.276654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.401 [2024-11-05 16:59:54.286563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.401 [2024-11-05 16:59:54.286619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.401 [2024-11-05 16:59:54.286630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.401 [2024-11-05 16:59:54.286635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.401 [2024-11-05 16:59:54.286640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.401 [2024-11-05 16:59:54.286651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.401 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.296600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.296645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.296656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.296660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.296665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.296676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.306642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.306693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.306703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.306708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.306713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.306723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.316719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.316772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.316786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.316791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.316795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.316806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.326581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.326630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.326640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.326645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.326650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.326660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.336705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.336785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.336795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.336801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.336805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.336816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.346626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.346678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.346688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.346693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.346698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.346709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.356769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.356822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.356832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.356843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.356848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.356858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.366785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.366842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.366852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.366857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.366862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.366872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.376825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.376877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.376887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.376892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.376897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.376907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.386849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.386900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.386909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.386914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.386919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.386929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.396899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.396952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.396962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.396967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.396972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.396982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.406895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.406943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.406952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.406958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.406962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.406973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.416966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.402 [2024-11-05 16:59:54.417011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.402 [2024-11-05 16:59:54.417020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.402 [2024-11-05 16:59:54.417025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.402 [2024-11-05 16:59:54.417030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.402 [2024-11-05 16:59:54.417040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.402 qpair failed and we were unable to recover it. 
00:35:47.402 [2024-11-05 16:59:54.426939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.403 [2024-11-05 16:59:54.426985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.403 [2024-11-05 16:59:54.426995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.403 [2024-11-05 16:59:54.427000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.403 [2024-11-05 16:59:54.427005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.403 [2024-11-05 16:59:54.427015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.403 qpair failed and we were unable to recover it. 
00:35:47.403 [2024-11-05 16:59:54.436986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.403 [2024-11-05 16:59:54.437034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.403 [2024-11-05 16:59:54.437044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.403 [2024-11-05 16:59:54.437049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.403 [2024-11-05 16:59:54.437053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.403 [2024-11-05 16:59:54.437063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.403 qpair failed and we were unable to recover it.
00:35:47.403 [2024-11-05 16:59:54.447018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.403 [2024-11-05 16:59:54.447072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.403 [2024-11-05 16:59:54.447082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.403 [2024-11-05 16:59:54.447087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.403 [2024-11-05 16:59:54.447092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.403 [2024-11-05 16:59:54.447102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.403 qpair failed and we were unable to recover it.
00:35:47.403 [2024-11-05 16:59:54.457023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.403 [2024-11-05 16:59:54.457110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.403 [2024-11-05 16:59:54.457120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.403 [2024-11-05 16:59:54.457125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.403 [2024-11-05 16:59:54.457131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.403 [2024-11-05 16:59:54.457141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.403 qpair failed and we were unable to recover it.
00:35:47.686 [2024-11-05 16:59:54.466943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.686 [2024-11-05 16:59:54.466993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.686 [2024-11-05 16:59:54.467004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.686 [2024-11-05 16:59:54.467009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.686 [2024-11-05 16:59:54.467014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.686 [2024-11-05 16:59:54.467025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.686 qpair failed and we were unable to recover it.
00:35:47.686 [2024-11-05 16:59:54.477082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.686 [2024-11-05 16:59:54.477174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.686 [2024-11-05 16:59:54.477184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.686 [2024-11-05 16:59:54.477190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.686 [2024-11-05 16:59:54.477195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.686 [2024-11-05 16:59:54.477205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.686 qpair failed and we were unable to recover it.
00:35:47.686 [2024-11-05 16:59:54.487022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.686 [2024-11-05 16:59:54.487073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.487083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.487091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.487095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.487106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.497171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.497234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.497243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.497249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.497253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.497263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.507221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.507279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.507289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.507294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.507299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.507309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.517210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.517262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.517271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.517277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.517281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.517291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.527194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.527244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.527253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.527259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.527263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.527276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.537249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.537302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.537312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.537317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.537322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.537332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.547272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.547321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.547330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.547336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.547340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.547351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.557324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.557374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.557383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.557389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.557394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.557404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.567333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.567408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.567418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.567423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.567428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.567438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.577351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.577403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.577413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.577419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.577423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.577433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.587404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.587456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.587465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.587471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.587475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.587485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.597305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.597355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.597365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.597370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.597375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.597385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.607448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.607493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.607503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.607508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.607513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.607523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.617462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.617514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.617536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.617543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.617548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.617563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.627381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.627432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.627446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.627451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.627456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.627468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.637540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.637592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.637611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.637617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.637622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.637636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.647440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.647489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.647500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.647505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.647510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.647522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.657580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.657632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.657642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.657647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.657652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.657666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.667637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.667685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.667695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.667700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.667705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.667716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.677654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.677704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.677714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.677720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.677724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.677735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.687540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.687584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.687594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.687599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.687603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.687614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.697683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.697727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.697737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.697742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.697750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.697761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.707695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.707753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.707764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.707769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.707773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.707784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.717735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.717794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.717819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.717824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.717828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.687 [2024-11-05 16:59:54.717846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.687 qpair failed and we were unable to recover it.
00:35:47.687 [2024-11-05 16:59:54.727656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.687 [2024-11-05 16:59:54.727703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.687 [2024-11-05 16:59:54.727714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.687 [2024-11-05 16:59:54.727719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.687 [2024-11-05 16:59:54.727724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.688 [2024-11-05 16:59:54.727735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.688 qpair failed and we were unable to recover it.
00:35:47.688 [2024-11-05 16:59:54.737679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.688 [2024-11-05 16:59:54.737723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.688 [2024-11-05 16:59:54.737734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.688 [2024-11-05 16:59:54.737739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.688 [2024-11-05 16:59:54.737744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.688 [2024-11-05 16:59:54.737759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.688 qpair failed and we were unable to recover it.
00:35:47.688 [2024-11-05 16:59:54.747858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.688 [2024-11-05 16:59:54.747907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.688 [2024-11-05 16:59:54.747919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.688 [2024-11-05 16:59:54.747924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.688 [2024-11-05 16:59:54.747929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.688 [2024-11-05 16:59:54.747940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.688 qpair failed and we were unable to recover it.
00:35:47.979 [2024-11-05 16:59:54.757853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.979 [2024-11-05 16:59:54.757913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.979 [2024-11-05 16:59:54.757923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.979 [2024-11-05 16:59:54.757928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.979 [2024-11-05 16:59:54.757933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.979 [2024-11-05 16:59:54.757943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.980 qpair failed and we were unable to recover it.
00:35:47.980 [2024-11-05 16:59:54.767884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.980 [2024-11-05 16:59:54.767929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.980 [2024-11-05 16:59:54.767939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.980 [2024-11-05 16:59:54.767944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.980 [2024-11-05 16:59:54.767950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.980 [2024-11-05 16:59:54.767960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.980 qpair failed and we were unable to recover it.
00:35:47.980 [2024-11-05 16:59:54.777819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:47.980 [2024-11-05 16:59:54.777865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:47.980 [2024-11-05 16:59:54.777877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:47.980 [2024-11-05 16:59:54.777882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:47.980 [2024-11-05 16:59:54.777887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:47.980 [2024-11-05 16:59:54.777897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:47.980 qpair failed and we were unable to recover it.
00:35:47.980 [2024-11-05 16:59:54.787983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.788067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.788077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.788083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.788092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.788102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.797986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.798037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.798047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.798053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.798058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.798068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.808003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.808046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.808056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.808062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.808066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.808077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.818036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.818080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.818090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.818095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.818100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.818111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.828064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.828111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.828120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.828126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.828131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.828141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.838098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.838148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.838158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.838164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.838168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.838179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.848127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.848198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.848207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.848212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.848217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.848227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.858117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.858176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.858187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.858192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.858197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.858207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.868294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.868349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.868359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.868364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.868368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.868378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.878274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.878372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.878384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.878389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.878394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.878405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.888286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.888378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.888388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.888394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.888399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.888409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.898295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.898344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.898354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.898359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.898363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.898374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.908311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.908368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.908378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.908383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.908387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.908397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.918337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.918387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.918397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.918404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.918409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.918419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.928328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.928373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.928382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.928388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.928392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.928402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.938373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.938423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.938433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.938438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.938442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.938452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.948280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.948329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.948338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.980 [2024-11-05 16:59:54.948344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.980 [2024-11-05 16:59:54.948348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.980 [2024-11-05 16:59:54.948358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.980 qpair failed and we were unable to recover it. 
00:35:47.980 [2024-11-05 16:59:54.958451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.980 [2024-11-05 16:59:54.958498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.980 [2024-11-05 16:59:54.958507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.981 [2024-11-05 16:59:54.958512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.981 [2024-11-05 16:59:54.958517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.981 [2024-11-05 16:59:54.958527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.981 qpair failed and we were unable to recover it. 
00:35:47.981 [2024-11-05 16:59:54.968330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.981 [2024-11-05 16:59:54.968378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.981 [2024-11-05 16:59:54.968389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.981 [2024-11-05 16:59:54.968394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.981 [2024-11-05 16:59:54.968399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.981 [2024-11-05 16:59:54.968410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.981 qpair failed and we were unable to recover it. 
00:35:47.981 [2024-11-05 16:59:54.978493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.981 [2024-11-05 16:59:54.978563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.981 [2024-11-05 16:59:54.978573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.981 [2024-11-05 16:59:54.978578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.981 [2024-11-05 16:59:54.978583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.981 [2024-11-05 16:59:54.978593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.981 qpair failed and we were unable to recover it. 
00:35:47.981 [2024-11-05 16:59:54.988485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.981 [2024-11-05 16:59:54.988538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.981 [2024-11-05 16:59:54.988557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.981 [2024-11-05 16:59:54.988563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.981 [2024-11-05 16:59:54.988568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.981 [2024-11-05 16:59:54.988583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.981 qpair failed and we were unable to recover it. 
00:35:47.981 [2024-11-05 16:59:54.998560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.981 [2024-11-05 16:59:54.998610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.981 [2024-11-05 16:59:54.998629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.981 [2024-11-05 16:59:54.998635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.981 [2024-11-05 16:59:54.998640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.981 [2024-11-05 16:59:54.998654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.981 qpair failed and we were unable to recover it. 
00:35:47.981 [2024-11-05 16:59:55.008442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.981 [2024-11-05 16:59:55.008503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.981 [2024-11-05 16:59:55.008522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.981 [2024-11-05 16:59:55.008528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.981 [2024-11-05 16:59:55.008533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.981 [2024-11-05 16:59:55.008548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.981 qpair failed and we were unable to recover it. 
00:35:47.981 [2024-11-05 16:59:55.018652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.981 [2024-11-05 16:59:55.018703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.981 [2024-11-05 16:59:55.018722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.981 [2024-11-05 16:59:55.018729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.981 [2024-11-05 16:59:55.018734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.981 [2024-11-05 16:59:55.018753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.981 qpair failed and we were unable to recover it. 
00:35:47.981 [2024-11-05 16:59:55.028593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:47.981 [2024-11-05 16:59:55.028642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:47.981 [2024-11-05 16:59:55.028654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:47.981 [2024-11-05 16:59:55.028659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:47.981 [2024-11-05 16:59:55.028664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:47.981 [2024-11-05 16:59:55.028675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:47.981 qpair failed and we were unable to recover it. 
00:35:48.242 [2024-11-05 16:59:55.038660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.242 [2024-11-05 16:59:55.038714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.242 [2024-11-05 16:59:55.038724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.242 [2024-11-05 16:59:55.038729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.242 [2024-11-05 16:59:55.038734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.242 [2024-11-05 16:59:55.038749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.242 qpair failed and we were unable to recover it. 
00:35:48.242 [2024-11-05 16:59:55.048684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.242 [2024-11-05 16:59:55.048731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.242 [2024-11-05 16:59:55.048741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.242 [2024-11-05 16:59:55.048753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.242 [2024-11-05 16:59:55.048758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.242 [2024-11-05 16:59:55.048769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.242 qpair failed and we were unable to recover it. 
00:35:48.242 [2024-11-05 16:59:55.058572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.242 [2024-11-05 16:59:55.058615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.242 [2024-11-05 16:59:55.058624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.242 [2024-11-05 16:59:55.058630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.242 [2024-11-05 16:59:55.058634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.242 [2024-11-05 16:59:55.058645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.242 qpair failed and we were unable to recover it. 
00:35:48.242 [... the identical seven-record CONNECT failure sequence (ctrlr.c:762 "Unknown controller ID 0x1" → nvme_fabric.c:599/610 "Connect command failed, rc -5 ... sct 1, sc 130" → nvme_tcp.c:2348/2125 → nvme_qpair.c:812 "CQ transport error -6 (No such device or address) on qpair id 2") repeats 34 more times at roughly 10 ms intervals, timestamps 16:59:55.068740 through 16:59:55.399765, each iteration ending: qpair failed and we were unable to recover it. ...]
00:35:48.504 [2024-11-05 16:59:55.409685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.504 [2024-11-05 16:59:55.409730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.504 [2024-11-05 16:59:55.409740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.504 [2024-11-05 16:59:55.409750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.504 [2024-11-05 16:59:55.409754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.504 [2024-11-05 16:59:55.409765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.504 qpair failed and we were unable to recover it. 
00:35:48.504 [2024-11-05 16:59:55.419608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.419655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.419665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.419670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.419675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.419686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.429751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.429803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.429813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.429818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.429823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.429833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.439796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.439842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.439854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.439859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.439864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.439874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.449802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.449851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.449860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.449866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.449871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.449881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.459698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.459744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.459758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.459763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.459768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.459779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.469841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.469914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.469924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.469929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.469934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.469944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.479909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.479959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.479969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.479977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.479981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.479992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.489793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.489837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.489847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.489852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.489857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.489867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.499939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.499986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.499996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.500001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.500006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.500016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.509859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.509912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.509923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.509928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.509933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.509944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.520009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.520105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.520116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.520121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.520126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.520137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.530022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.530069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.530079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.530084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.530089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.530099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.540050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.540099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.505 [2024-11-05 16:59:55.540109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.505 [2024-11-05 16:59:55.540114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.505 [2024-11-05 16:59:55.540119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.505 [2024-11-05 16:59:55.540129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.505 qpair failed and we were unable to recover it. 
00:35:48.505 [2024-11-05 16:59:55.550075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.505 [2024-11-05 16:59:55.550121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.506 [2024-11-05 16:59:55.550131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.506 [2024-11-05 16:59:55.550136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.506 [2024-11-05 16:59:55.550141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.506 [2024-11-05 16:59:55.550151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.506 qpair failed and we were unable to recover it. 
00:35:48.506 [2024-11-05 16:59:55.560026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.506 [2024-11-05 16:59:55.560075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.506 [2024-11-05 16:59:55.560084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.506 [2024-11-05 16:59:55.560089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.506 [2024-11-05 16:59:55.560094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.506 [2024-11-05 16:59:55.560103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.506 qpair failed and we were unable to recover it. 
00:35:48.767 [2024-11-05 16:59:55.570123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.767 [2024-11-05 16:59:55.570177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.767 [2024-11-05 16:59:55.570187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.767 [2024-11-05 16:59:55.570192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.767 [2024-11-05 16:59:55.570197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.767 [2024-11-05 16:59:55.570206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.767 qpair failed and we were unable to recover it. 
00:35:48.767 [2024-11-05 16:59:55.580174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.767 [2024-11-05 16:59:55.580238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.767 [2024-11-05 16:59:55.580248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.767 [2024-11-05 16:59:55.580253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.767 [2024-11-05 16:59:55.580257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.767 [2024-11-05 16:59:55.580267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.767 qpair failed and we were unable to recover it. 
00:35:48.767 [2024-11-05 16:59:55.590203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.767 [2024-11-05 16:59:55.590253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.767 [2024-11-05 16:59:55.590263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.767 [2024-11-05 16:59:55.590268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.590273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.590283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.600107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.768 [2024-11-05 16:59:55.600157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.768 [2024-11-05 16:59:55.600167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.768 [2024-11-05 16:59:55.600172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.600177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.600187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.610262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.768 [2024-11-05 16:59:55.610309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.768 [2024-11-05 16:59:55.610319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.768 [2024-11-05 16:59:55.610327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.610331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.610342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.620285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.768 [2024-11-05 16:59:55.620332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.768 [2024-11-05 16:59:55.620342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.768 [2024-11-05 16:59:55.620347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.620352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.620362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.630318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.768 [2024-11-05 16:59:55.630366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.768 [2024-11-05 16:59:55.630375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.768 [2024-11-05 16:59:55.630380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.630385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.630395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.640384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.768 [2024-11-05 16:59:55.640467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.768 [2024-11-05 16:59:55.640477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.768 [2024-11-05 16:59:55.640482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.640487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.640497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.650347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.768 [2024-11-05 16:59:55.650430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.768 [2024-11-05 16:59:55.650439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.768 [2024-11-05 16:59:55.650444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.650450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.650462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.660265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.768 [2024-11-05 16:59:55.660316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.768 [2024-11-05 16:59:55.660326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.768 [2024-11-05 16:59:55.660331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.660335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.660345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.670436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.768 [2024-11-05 16:59:55.670485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.768 [2024-11-05 16:59:55.670495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.768 [2024-11-05 16:59:55.670500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.768 [2024-11-05 16:59:55.670504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:48.768 [2024-11-05 16:59:55.670514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.768 qpair failed and we were unable to recover it. 
00:35:48.768 [2024-11-05 16:59:55.680448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.768 [2024-11-05 16:59:55.680496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.768 [2024-11-05 16:59:55.680506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.768 [2024-11-05 16:59:55.680511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.768 [2024-11-05 16:59:55.680516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.768 [2024-11-05 16:59:55.680526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.768 qpair failed and we were unable to recover it.
00:35:48.768 [2024-11-05 16:59:55.690488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.768 [2024-11-05 16:59:55.690539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.768 [2024-11-05 16:59:55.690549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.690554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.690559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.690569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.700507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.700558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.700568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.700573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.700578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.700588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.710529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.710601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.710610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.710616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.710620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.710631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.720554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.720605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.720614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.720620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.720625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.720635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.730568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.730654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.730663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.730669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.730674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.730685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.740604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.740645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.740657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.740662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.740667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.740677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.750590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.750632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.750642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.750646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.750651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.750661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.760621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.760684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.760693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.760698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.760703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.760713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.770648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.770707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.770717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.770722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.770727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.770736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.780694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.780735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.780748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.780754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-11-05 16:59:55.780761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.769 [2024-11-05 16:59:55.780771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-11-05 16:59:55.790693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-11-05 16:59:55.790736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-11-05 16:59:55.790751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-11-05 16:59:55.790756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-11-05 16:59:55.790762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.770 [2024-11-05 16:59:55.790772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-11-05 16:59:55.800749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-11-05 16:59:55.800803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-11-05 16:59:55.800812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-11-05 16:59:55.800817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-11-05 16:59:55.800822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.770 [2024-11-05 16:59:55.800832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-11-05 16:59:55.810768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-11-05 16:59:55.810821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-11-05 16:59:55.810831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-11-05 16:59:55.810836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-11-05 16:59:55.810841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.770 [2024-11-05 16:59:55.810851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-11-05 16:59:55.820802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-11-05 16:59:55.820844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-11-05 16:59:55.820853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-11-05 16:59:55.820859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-11-05 16:59:55.820864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:48.770 [2024-11-05 16:59:55.820874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:48.770 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.830806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.830850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.830860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.830865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.830870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.830880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.840852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.840927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.840937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.840942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.840947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.840957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.850924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.850980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.850989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.850995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.850999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.851010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.860921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.860965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.860975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.860980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.860985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.860995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.870892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.870934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.870946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.870951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.870956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.870966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.881005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.881050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.881059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.881065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.881069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.881080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.890904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.890947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.890958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.890963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.890968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.890978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.901035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.901077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.901086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.901092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.901096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.901106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.911024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.911068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.911077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.911083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.911090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.911101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.920959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.921005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.921015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.921020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.921025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.921035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.931081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.931123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.931133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.931138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.931143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.931153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.941137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.941233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.941243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.032 [2024-11-05 16:59:55.941248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.032 [2024-11-05 16:59:55.941253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.032 [2024-11-05 16:59:55.941263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.032 qpair failed and we were unable to recover it.
00:35:49.032 [2024-11-05 16:59:55.951041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.032 [2024-11-05 16:59:55.951080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.032 [2024-11-05 16:59:55.951089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.033 [2024-11-05 16:59:55.951094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.033 [2024-11-05 16:59:55.951099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.033 [2024-11-05 16:59:55.951109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.033 qpair failed and we were unable to recover it.
00:35:49.033 [2024-11-05 16:59:55.961196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.033 [2024-11-05 16:59:55.961245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.033 [2024-11-05 16:59:55.961255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.033 [2024-11-05 16:59:55.961260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.033 [2024-11-05 16:59:55.961264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.033 [2024-11-05 16:59:55.961274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.033 qpair failed and we were unable to recover it.
00:35:49.033 [2024-11-05 16:59:55.971224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.033 [2024-11-05 16:59:55.971267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.033 [2024-11-05 16:59:55.971276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.033 [2024-11-05 16:59:55.971281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.033 [2024-11-05 16:59:55.971286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.033 [2024-11-05 16:59:55.971295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.033 qpair failed and we were unable to recover it.
00:35:49.033 [2024-11-05 16:59:55.981220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.033 [2024-11-05 16:59:55.981261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.033 [2024-11-05 16:59:55.981270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.033 [2024-11-05 16:59:55.981275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.033 [2024-11-05 16:59:55.981280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.033 [2024-11-05 16:59:55.981290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.033 qpair failed and we were unable to recover it.
00:35:49.033 [2024-11-05 16:59:55.991242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.033 [2024-11-05 16:59:55.991281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.033 [2024-11-05 16:59:55.991290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.033 [2024-11-05 16:59:55.991295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.033 [2024-11-05 16:59:55.991300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.033 [2024-11-05 16:59:55.991309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.033 qpair failed and we were unable to recover it.
00:35:49.033 [2024-11-05 16:59:56.001311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.033 [2024-11-05 16:59:56.001362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.033 [2024-11-05 16:59:56.001377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.033 [2024-11-05 16:59:56.001382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.033 [2024-11-05 16:59:56.001386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.033 [2024-11-05 16:59:56.001396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.033 qpair failed and we were unable to recover it.
00:35:49.033 [2024-11-05 16:59:56.011321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.033 [2024-11-05 16:59:56.011366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.033 [2024-11-05 16:59:56.011376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.033 [2024-11-05 16:59:56.011381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.033 [2024-11-05 16:59:56.011386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.033 [2024-11-05 16:59:56.011396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.033 qpair failed and we were unable to recover it.
00:35:49.033 [2024-11-05 16:59:56.021355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.033 [2024-11-05 16:59:56.021399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.033 [2024-11-05 16:59:56.021408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.033 [2024-11-05 16:59:56.021414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.033 [2024-11-05 16:59:56.021418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.033 [2024-11-05 16:59:56.021428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.033 qpair failed and we were unable to recover it.
00:35:49.033 [2024-11-05 16:59:56.031343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-11-05 16:59:56.031388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-11-05 16:59:56.031397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-11-05 16:59:56.031402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-11-05 16:59:56.031407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.033 [2024-11-05 16:59:56.031416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.033 qpair failed and we were unable to recover it. 
00:35:49.033 [2024-11-05 16:59:56.041416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-11-05 16:59:56.041462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-11-05 16:59:56.041471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-11-05 16:59:56.041479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-11-05 16:59:56.041483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.033 [2024-11-05 16:59:56.041494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.033 qpair failed and we were unable to recover it. 
00:35:49.033 [2024-11-05 16:59:56.051459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-11-05 16:59:56.051500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-11-05 16:59:56.051510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-11-05 16:59:56.051515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-11-05 16:59:56.051520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.033 [2024-11-05 16:59:56.051530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.033 qpair failed and we were unable to recover it. 
00:35:49.033 [2024-11-05 16:59:56.061446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-11-05 16:59:56.061499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-11-05 16:59:56.061518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-11-05 16:59:56.061524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-11-05 16:59:56.061530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.033 [2024-11-05 16:59:56.061544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.033 qpair failed and we were unable to recover it. 
00:35:49.033 [2024-11-05 16:59:56.071327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-11-05 16:59:56.071369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-11-05 16:59:56.071380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-11-05 16:59:56.071385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-11-05 16:59:56.071390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.033 [2024-11-05 16:59:56.071402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.033 qpair failed and we were unable to recover it. 
00:35:49.033 [2024-11-05 16:59:56.081495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-11-05 16:59:56.081543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-11-05 16:59:56.081553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-11-05 16:59:56.081558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-11-05 16:59:56.081563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.034 [2024-11-05 16:59:56.081578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.034 qpair failed and we were unable to recover it. 
00:35:49.034 [2024-11-05 16:59:56.091412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.034 [2024-11-05 16:59:56.091469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.034 [2024-11-05 16:59:56.091482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.034 [2024-11-05 16:59:56.091488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.034 [2024-11-05 16:59:56.091492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.034 [2024-11-05 16:59:56.091504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.034 qpair failed and we were unable to recover it. 
00:35:49.295 [2024-11-05 16:59:56.101466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.295 [2024-11-05 16:59:56.101566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.295 [2024-11-05 16:59:56.101585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.295 [2024-11-05 16:59:56.101592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.295 [2024-11-05 16:59:56.101598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.295 [2024-11-05 16:59:56.101613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.295 qpair failed and we were unable to recover it. 
00:35:49.295 [2024-11-05 16:59:56.111528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.295 [2024-11-05 16:59:56.111571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.295 [2024-11-05 16:59:56.111582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.295 [2024-11-05 16:59:56.111588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.295 [2024-11-05 16:59:56.111593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.295 [2024-11-05 16:59:56.111605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.295 qpair failed and we were unable to recover it. 
00:35:49.295 [2024-11-05 16:59:56.121641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.295 [2024-11-05 16:59:56.121693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.295 [2024-11-05 16:59:56.121704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.295 [2024-11-05 16:59:56.121710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.295 [2024-11-05 16:59:56.121715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.295 [2024-11-05 16:59:56.121725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.295 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.131539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.131585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.131597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.131602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.131607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.131618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.141692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.141734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.141744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.141754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.141758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.141769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.151669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.151715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.151725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.151730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.151735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.151749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.161667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.161764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.161774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.161779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.161784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.161794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.171778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.171869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.171879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.171887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.171893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.171905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.181796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.181837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.181847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.181853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.181857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.181869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.191808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.191853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.191863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.191868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.191873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.191883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.201858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.201910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.201920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.201926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.201931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.201941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.211871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.211915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.211925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.211930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.211935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.211948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.221809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.221854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.221864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.221869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.221874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.221885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.231881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.231921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.231931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.231937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.231941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.231952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.241941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.241987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.241997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.242003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.242008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.242018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.251850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.296 [2024-11-05 16:59:56.251893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.296 [2024-11-05 16:59:56.251903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.296 [2024-11-05 16:59:56.251908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.296 [2024-11-05 16:59:56.251912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.296 [2024-11-05 16:59:56.251923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.296 qpair failed and we were unable to recover it. 
00:35:49.296 [2024-11-05 16:59:56.262051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.297 [2024-11-05 16:59:56.262099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.297 [2024-11-05 16:59:56.262109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.297 [2024-11-05 16:59:56.262114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.297 [2024-11-05 16:59:56.262119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.297 [2024-11-05 16:59:56.262129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.297 qpair failed and we were unable to recover it. 
00:35:49.297 [2024-11-05 16:59:56.271989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.297 [2024-11-05 16:59:56.272057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.297 [2024-11-05 16:59:56.272067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.297 [2024-11-05 16:59:56.272072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.297 [2024-11-05 16:59:56.272077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.297 [2024-11-05 16:59:56.272087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.297 qpair failed and we were unable to recover it. 
00:35:49.297 [2024-11-05 16:59:56.282107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.297 [2024-11-05 16:59:56.282191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.297 [2024-11-05 16:59:56.282201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.297 [2024-11-05 16:59:56.282206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.297 [2024-11-05 16:59:56.282211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.297 [2024-11-05 16:59:56.282221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.297 qpair failed and we were unable to recover it. 
00:35:49.297 [2024-11-05 16:59:56.292096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.297 [2024-11-05 16:59:56.292139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.297 [2024-11-05 16:59:56.292149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.297 [2024-11-05 16:59:56.292154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.297 [2024-11-05 16:59:56.292159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.297 [2024-11-05 16:59:56.292169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.297 qpair failed and we were unable to recover it. 
00:35:49.297 [2024-11-05 16:59:56.302108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.297 [2024-11-05 16:59:56.302152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.297 [2024-11-05 16:59:56.302164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.297 [2024-11-05 16:59:56.302169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.297 [2024-11-05 16:59:56.302174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.297 [2024-11-05 16:59:56.302184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.297 qpair failed and we were unable to recover it.
00:35:49.297 [2024-11-05 16:59:56.312084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.297 [2024-11-05 16:59:56.312130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.297 [2024-11-05 16:59:56.312140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.297 [2024-11-05 16:59:56.312145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.297 [2024-11-05 16:59:56.312150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.297 [2024-11-05 16:59:56.312160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.297 qpair failed and we were unable to recover it.
00:35:49.297 [2024-11-05 16:59:56.322181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.297 [2024-11-05 16:59:56.322273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.297 [2024-11-05 16:59:56.322283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.297 [2024-11-05 16:59:56.322289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.297 [2024-11-05 16:59:56.322293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.297 [2024-11-05 16:59:56.322303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.297 qpair failed and we were unable to recover it.
00:35:49.297 [2024-11-05 16:59:56.332172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.297 [2024-11-05 16:59:56.332220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.297 [2024-11-05 16:59:56.332230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.297 [2024-11-05 16:59:56.332236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.297 [2024-11-05 16:59:56.332240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.297 [2024-11-05 16:59:56.332250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.297 qpair failed and we were unable to recover it.
00:35:49.297 [2024-11-05 16:59:56.342247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.297 [2024-11-05 16:59:56.342339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.297 [2024-11-05 16:59:56.342349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.297 [2024-11-05 16:59:56.342354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.297 [2024-11-05 16:59:56.342361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.297 [2024-11-05 16:59:56.342372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.297 qpair failed and we were unable to recover it.
00:35:49.297 [2024-11-05 16:59:56.352217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.297 [2024-11-05 16:59:56.352261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.297 [2024-11-05 16:59:56.352270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.297 [2024-11-05 16:59:56.352275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.297 [2024-11-05 16:59:56.352280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.297 [2024-11-05 16:59:56.352290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.297 qpair failed and we were unable to recover it.
00:35:49.558 [2024-11-05 16:59:56.362282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.558 [2024-11-05 16:59:56.362332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.558 [2024-11-05 16:59:56.362341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.558 [2024-11-05 16:59:56.362347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.558 [2024-11-05 16:59:56.362351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.558 [2024-11-05 16:59:56.362361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.558 qpair failed and we were unable to recover it.
00:35:49.558 [2024-11-05 16:59:56.372363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.558 [2024-11-05 16:59:56.372435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.558 [2024-11-05 16:59:56.372445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.558 [2024-11-05 16:59:56.372450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.372455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.372465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.382326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.382368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.382378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.382383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.382388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.382398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.392329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.392371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.392381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.392386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.392391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.392401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.402451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.402501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.402510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.402515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.402520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.402530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.412405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.412452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.412470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.412477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.412482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.412496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.422448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.422538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.422556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.422563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.422569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.422584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.432379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.432454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.432476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.432484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.432489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.432503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.442515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.442607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.442626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.442633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.442638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.442652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.452543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.452620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.452632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.452637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.452642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.452653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.462505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.462593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.462603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.462609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.462613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.462624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.472402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.472466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.472476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.472481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.472489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.472499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.482603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.482649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.482659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.482664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.482669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.482679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.492630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.492675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.559 [2024-11-05 16:59:56.492684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.559 [2024-11-05 16:59:56.492689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.559 [2024-11-05 16:59:56.492694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.559 [2024-11-05 16:59:56.492705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.559 qpair failed and we were unable to recover it.
00:35:49.559 [2024-11-05 16:59:56.502629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.559 [2024-11-05 16:59:56.502709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.502719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.502724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.502728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.502739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.512643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.512684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.512694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.512699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.512704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.512714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.522776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.522854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.522864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.522869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.522874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.522884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.532598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.532639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.532648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.532654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.532659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.532669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.542759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.542801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.542811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.542816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.542821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.542831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.552624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.552665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.552674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.552680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.552684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.552694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.562819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.562870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.562882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.562887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.562892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.562903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.572829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.572867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.572877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.572882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.572887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.572897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.582917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.582962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.582972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.582977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.582982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.582992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.592826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.592870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.592879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.592884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.592889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.592899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.602894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.602940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.602950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.602957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.602962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.602972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.560 [2024-11-05 16:59:56.613037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.560 [2024-11-05 16:59:56.613120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.560 [2024-11-05 16:59:56.613129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.560 [2024-11-05 16:59:56.613134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.560 [2024-11-05 16:59:56.613139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.560 [2024-11-05 16:59:56.613148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.560 qpair failed and we were unable to recover it.
00:35:49.822 [2024-11-05 16:59:56.622942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.822 [2024-11-05 16:59:56.623009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.822 [2024-11-05 16:59:56.623019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.822 [2024-11-05 16:59:56.623023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.822 [2024-11-05 16:59:56.623028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.822 [2024-11-05 16:59:56.623038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.822 qpair failed and we were unable to recover it.
00:35:49.822 [2024-11-05 16:59:56.632958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.822 [2024-11-05 16:59:56.632999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.822 [2024-11-05 16:59:56.633008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.822 [2024-11-05 16:59:56.633013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.822 [2024-11-05 16:59:56.633018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.822 [2024-11-05 16:59:56.633028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.822 qpair failed and we were unable to recover it.
00:35:49.822 [2024-11-05 16:59:56.643047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.822 [2024-11-05 16:59:56.643098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.822 [2024-11-05 16:59:56.643108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.822 [2024-11-05 16:59:56.643113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.822 [2024-11-05 16:59:56.643117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:49.822 [2024-11-05 16:59:56.643130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:49.822 qpair failed and we were unable to recover it.
00:35:49.822 [2024-11-05 16:59:56.653048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.653090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.653099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.653104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.653109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.653119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.663076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.663117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.663126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.663132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.663136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.663146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.673063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.673103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.673112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.673117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.673122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.673132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.683154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.683207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.683216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.683221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.683226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.683236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.693130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.693174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.693183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.693188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.693193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.693203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.703159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.703204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.703213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.703218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.703223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.703233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.713144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.713189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.713199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.713204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.713208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.713219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.723112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.723158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.723167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.723172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.723177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.723187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.733211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.822 [2024-11-05 16:59:56.733260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.822 [2024-11-05 16:59:56.733270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.822 [2024-11-05 16:59:56.733281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.822 [2024-11-05 16:59:56.733286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.822 [2024-11-05 16:59:56.733295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.822 qpair failed and we were unable to recover it. 
00:35:49.822 [2024-11-05 16:59:56.743283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.743320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.743330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.743335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.743339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.743349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.753139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.753182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.753191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.753197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.753201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.753211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.763372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.763417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.763426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.763432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.763436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.763446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.773286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.773324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.773333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.773339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.773343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.773356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.783368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.783409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.783424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.783430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.783435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.783447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.793397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.793478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.793488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.793493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.793498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.793508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.803459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.803509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.803519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.803524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.803529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.803539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.813311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.813368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.813378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.813383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.813388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.813398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.823475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.823519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.823528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.823533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.823538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.823547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.833493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.833537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.833555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.833562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.833567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.833581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.843577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.843624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.843643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.843650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.843655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.843669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.853413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.853470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.853481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.853487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.853491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.853502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.823 qpair failed and we were unable to recover it. 
00:35:49.823 [2024-11-05 16:59:56.863589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.823 [2024-11-05 16:59:56.863639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.823 [2024-11-05 16:59:56.863661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.823 [2024-11-05 16:59:56.863668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.823 [2024-11-05 16:59:56.863673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.823 [2024-11-05 16:59:56.863687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.824 qpair failed and we were unable to recover it. 
00:35:49.824 [2024-11-05 16:59:56.873590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.824 [2024-11-05 16:59:56.873631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.824 [2024-11-05 16:59:56.873642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.824 [2024-11-05 16:59:56.873648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.824 [2024-11-05 16:59:56.873653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.824 [2024-11-05 16:59:56.873664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.824 qpair failed and we were unable to recover it. 
00:35:49.824 [2024-11-05 16:59:56.883681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.824 [2024-11-05 16:59:56.883728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.824 [2024-11-05 16:59:56.883738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.824 [2024-11-05 16:59:56.883743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.824 [2024-11-05 16:59:56.883751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:49.824 [2024-11-05 16:59:56.883762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:49.824 qpair failed and we were unable to recover it. 
00:35:50.085 [2024-11-05 16:59:56.893644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.085 [2024-11-05 16:59:56.893684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.085 [2024-11-05 16:59:56.893694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.085 [2024-11-05 16:59:56.893699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.085 [2024-11-05 16:59:56.893704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.085 [2024-11-05 16:59:56.893714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.085 qpair failed and we were unable to recover it. 
00:35:50.085 [2024-11-05 16:59:56.903572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.085 [2024-11-05 16:59:56.903614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.085 [2024-11-05 16:59:56.903624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.085 [2024-11-05 16:59:56.903629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.085 [2024-11-05 16:59:56.903636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.085 [2024-11-05 16:59:56.903647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.085 qpair failed and we were unable to recover it. 
00:35:50.085 [2024-11-05 16:59:56.913723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.085 [2024-11-05 16:59:56.913792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.085 [2024-11-05 16:59:56.913802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.085 [2024-11-05 16:59:56.913808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.085 [2024-11-05 16:59:56.913812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.085 [2024-11-05 16:59:56.913823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.085 qpair failed and we were unable to recover it. 
00:35:50.085 [2024-11-05 16:59:56.923757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.085 [2024-11-05 16:59:56.923799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.085 [2024-11-05 16:59:56.923809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.085 [2024-11-05 16:59:56.923814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.085 [2024-11-05 16:59:56.923819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.085 [2024-11-05 16:59:56.923829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.085 qpair failed and we were unable to recover it.
00:35:50.085 [2024-11-05 16:59:56.933754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.085 [2024-11-05 16:59:56.933798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.085 [2024-11-05 16:59:56.933807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.085 [2024-11-05 16:59:56.933812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.085 [2024-11-05 16:59:56.933817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.085 [2024-11-05 16:59:56.933827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.085 qpair failed and we were unable to recover it.
00:35:50.085 [2024-11-05 16:59:56.943771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.085 [2024-11-05 16:59:56.943823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.085 [2024-11-05 16:59:56.943833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.085 [2024-11-05 16:59:56.943838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.085 [2024-11-05 16:59:56.943842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.085 [2024-11-05 16:59:56.943853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.085 qpair failed and we were unable to recover it.
00:35:50.085 [2024-11-05 16:59:56.953807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.085 [2024-11-05 16:59:56.953866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.085 [2024-11-05 16:59:56.953876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.085 [2024-11-05 16:59:56.953881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.085 [2024-11-05 16:59:56.953886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.085 [2024-11-05 16:59:56.953896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:56.963855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:56.963931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:56.963940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:56.963945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:56.963950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:56.963960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:56.973731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:56.973777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:56.973787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:56.973792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:56.973797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:56.973808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:56.983948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:56.984006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:56.984015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:56.984020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:56.984025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:56.984035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:56.993940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:56.993978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:56.993990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:56.993995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:56.994000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:56.994010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:57.003853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:57.003899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:57.003909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:57.003914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:57.003919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:57.003929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:57.013968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:57.014012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:57.014022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:57.014027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:57.014032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:57.014042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:57.023999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:57.024072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:57.024083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:57.024088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:57.024093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:57.024104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:57.033889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:57.033928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:57.033938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:57.033943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:57.033950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:57.033961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:57.044111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:57.044159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:57.044169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:57.044174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:57.044178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:57.044189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:57.054075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:57.054120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:57.054130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:57.054135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:57.054139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:57.054149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:57.064182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:57.064228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:57.064238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:57.064243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:57.064248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.086 [2024-11-05 16:59:57.064258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.086 qpair failed and we were unable to recover it.
00:35:50.086 [2024-11-05 16:59:57.074045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.086 [2024-11-05 16:59:57.074087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.086 [2024-11-05 16:59:57.074096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.086 [2024-11-05 16:59:57.074102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.086 [2024-11-05 16:59:57.074106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.087 [2024-11-05 16:59:57.074116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.087 qpair failed and we were unable to recover it.
00:35:50.087 [2024-11-05 16:59:57.084214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.087 [2024-11-05 16:59:57.084260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.087 [2024-11-05 16:59:57.084270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.087 [2024-11-05 16:59:57.084275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.087 [2024-11-05 16:59:57.084280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.087 [2024-11-05 16:59:57.084290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.087 qpair failed and we were unable to recover it.
00:35:50.087 [2024-11-05 16:59:57.094209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.087 [2024-11-05 16:59:57.094262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.087 [2024-11-05 16:59:57.094272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.087 [2024-11-05 16:59:57.094277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.087 [2024-11-05 16:59:57.094281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.087 [2024-11-05 16:59:57.094292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.087 qpair failed and we were unable to recover it.
00:35:50.087 [2024-11-05 16:59:57.104209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.087 [2024-11-05 16:59:57.104256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.087 [2024-11-05 16:59:57.104266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.087 [2024-11-05 16:59:57.104271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.087 [2024-11-05 16:59:57.104276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.087 [2024-11-05 16:59:57.104286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.087 qpair failed and we were unable to recover it.
00:35:50.087 [2024-11-05 16:59:57.114211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.087 [2024-11-05 16:59:57.114255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.087 [2024-11-05 16:59:57.114265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.087 [2024-11-05 16:59:57.114270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.087 [2024-11-05 16:59:57.114275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.087 [2024-11-05 16:59:57.114285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.087 qpair failed and we were unable to recover it.
00:35:50.087 [2024-11-05 16:59:57.124312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.087 [2024-11-05 16:59:57.124356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.087 [2024-11-05 16:59:57.124368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.087 [2024-11-05 16:59:57.124373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.087 [2024-11-05 16:59:57.124378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.087 [2024-11-05 16:59:57.124388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.087 qpair failed and we were unable to recover it.
00:35:50.087 [2024-11-05 16:59:57.134142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.087 [2024-11-05 16:59:57.134181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.087 [2024-11-05 16:59:57.134191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.087 [2024-11-05 16:59:57.134196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.087 [2024-11-05 16:59:57.134201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.087 [2024-11-05 16:59:57.134211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.087 qpair failed and we were unable to recover it.
00:35:50.087 [2024-11-05 16:59:57.144311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.087 [2024-11-05 16:59:57.144350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.087 [2024-11-05 16:59:57.144359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.087 [2024-11-05 16:59:57.144364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.087 [2024-11-05 16:59:57.144369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.087 [2024-11-05 16:59:57.144379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.087 qpair failed and we were unable to recover it.
00:35:50.348 [2024-11-05 16:59:57.154205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.348 [2024-11-05 16:59:57.154246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.348 [2024-11-05 16:59:57.154256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.348 [2024-11-05 16:59:57.154261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.348 [2024-11-05 16:59:57.154266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.348 [2024-11-05 16:59:57.154276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.348 qpair failed and we were unable to recover it.
00:35:50.348 [2024-11-05 16:59:57.164413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.348 [2024-11-05 16:59:57.164477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.348 [2024-11-05 16:59:57.164487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.348 [2024-11-05 16:59:57.164495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.348 [2024-11-05 16:59:57.164499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.348 [2024-11-05 16:59:57.164509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.348 qpair failed and we were unable to recover it.
00:35:50.348 [2024-11-05 16:59:57.174251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.348 [2024-11-05 16:59:57.174288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.348 [2024-11-05 16:59:57.174298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.348 [2024-11-05 16:59:57.174304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.348 [2024-11-05 16:59:57.174308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.348 [2024-11-05 16:59:57.174319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.348 qpair failed and we were unable to recover it.
00:35:50.348 [2024-11-05 16:59:57.184416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.348 [2024-11-05 16:59:57.184503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.348 [2024-11-05 16:59:57.184513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.348 [2024-11-05 16:59:57.184519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.348 [2024-11-05 16:59:57.184524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.184535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.194458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.349 [2024-11-05 16:59:57.194500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.349 [2024-11-05 16:59:57.194510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.349 [2024-11-05 16:59:57.194515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.349 [2024-11-05 16:59:57.194519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.194530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.204481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.349 [2024-11-05 16:59:57.204527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.349 [2024-11-05 16:59:57.204537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.349 [2024-11-05 16:59:57.204542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.349 [2024-11-05 16:59:57.204547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.204560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.214354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.349 [2024-11-05 16:59:57.214391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.349 [2024-11-05 16:59:57.214401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.349 [2024-11-05 16:59:57.214406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.349 [2024-11-05 16:59:57.214410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.214420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.224409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.349 [2024-11-05 16:59:57.224452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.349 [2024-11-05 16:59:57.224462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.349 [2024-11-05 16:59:57.224467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.349 [2024-11-05 16:59:57.224471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.224481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.234549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.349 [2024-11-05 16:59:57.234591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.349 [2024-11-05 16:59:57.234601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.349 [2024-11-05 16:59:57.234606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.349 [2024-11-05 16:59:57.234610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.234620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.244626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.349 [2024-11-05 16:59:57.244673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.349 [2024-11-05 16:59:57.244683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.349 [2024-11-05 16:59:57.244688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.349 [2024-11-05 16:59:57.244693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.244703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.254621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.349 [2024-11-05 16:59:57.254665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.349 [2024-11-05 16:59:57.254675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.349 [2024-11-05 16:59:57.254680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.349 [2024-11-05 16:59:57.254685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.254695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.264563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.349 [2024-11-05 16:59:57.264606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.349 [2024-11-05 16:59:57.264615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.349 [2024-11-05 16:59:57.264620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.349 [2024-11-05 16:59:57.264625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90
00:35:50.349 [2024-11-05 16:59:57.264635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:50.349 qpair failed and we were unable to recover it.
00:35:50.349 [2024-11-05 16:59:57.274663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.349 [2024-11-05 16:59:57.274710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.349 [2024-11-05 16:59:57.274720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.349 [2024-11-05 16:59:57.274725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.349 [2024-11-05 16:59:57.274730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.349 [2024-11-05 16:59:57.274740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.349 qpair failed and we were unable to recover it. 
00:35:50.349 [2024-11-05 16:59:57.284732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.349 [2024-11-05 16:59:57.284781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.349 [2024-11-05 16:59:57.284791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.349 [2024-11-05 16:59:57.284796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.349 [2024-11-05 16:59:57.284801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.349 [2024-11-05 16:59:57.284812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.349 qpair failed and we were unable to recover it. 
00:35:50.349 [2024-11-05 16:59:57.294681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.349 [2024-11-05 16:59:57.294719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.349 [2024-11-05 16:59:57.294729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.349 [2024-11-05 16:59:57.294736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.349 [2024-11-05 16:59:57.294741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.349 [2024-11-05 16:59:57.294756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.349 qpair failed and we were unable to recover it. 
00:35:50.349 [2024-11-05 16:59:57.304742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.349 [2024-11-05 16:59:57.304801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.349 [2024-11-05 16:59:57.304810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.349 [2024-11-05 16:59:57.304815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.349 [2024-11-05 16:59:57.304820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.304831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.314740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.314783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.314793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.314799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.314804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.314814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.324813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.324873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.324883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.324888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.324893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.324903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.334820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.334858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.334868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.334873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.334878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.334891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.344841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.344907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.344917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.344922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.344927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.344937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.354752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.354794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.354804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.354809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.354814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.354825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.364972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.365023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.365032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.365038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.365042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.365052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.374919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.374962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.374972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.374977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.374981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.374991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.384814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.384851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.384860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.384865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.384870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.384880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.394998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.395042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.395052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.395057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.395062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.395072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.350 [2024-11-05 16:59:57.405037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.350 [2024-11-05 16:59:57.405081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.350 [2024-11-05 16:59:57.405090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.350 [2024-11-05 16:59:57.405095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.350 [2024-11-05 16:59:57.405100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.350 [2024-11-05 16:59:57.405109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.350 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.415027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.415062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.415071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.415077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.415081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.415092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.425057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.425098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.425113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.425118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.425122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.425133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.435101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.435144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.435154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.435158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.435163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.435173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.445169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.445218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.445227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.445233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.445237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.445247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.455119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.455158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.455167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.455172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.455177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.455187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.465143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.465180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.465189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.465195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.465202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.465212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.475188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.475240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.475249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.475254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.475259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.475269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.485144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.485188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.485198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.485204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.485209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.485219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-05 16:59:57.495134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.613 [2024-11-05 16:59:57.495178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.613 [2024-11-05 16:59:57.495188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.613 [2024-11-05 16:59:57.495193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.613 [2024-11-05 16:59:57.495198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.613 [2024-11-05 16:59:57.495209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-05 16:59:57.505257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.614 [2024-11-05 16:59:57.505302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.614 [2024-11-05 16:59:57.505311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.614 [2024-11-05 16:59:57.505316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.614 [2024-11-05 16:59:57.505321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.614 [2024-11-05 16:59:57.505331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-05 16:59:57.515320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.614 [2024-11-05 16:59:57.515362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.614 [2024-11-05 16:59:57.515372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.614 [2024-11-05 16:59:57.515378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.614 [2024-11-05 16:59:57.515383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.614 [2024-11-05 16:59:57.515393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-05 16:59:57.525389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.614 [2024-11-05 16:59:57.525436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.614 [2024-11-05 16:59:57.525446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.614 [2024-11-05 16:59:57.525451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.614 [2024-11-05 16:59:57.525456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.614 [2024-11-05 16:59:57.525466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-05 16:59:57.535227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.614 [2024-11-05 16:59:57.535268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.614 [2024-11-05 16:59:57.535277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.614 [2024-11-05 16:59:57.535282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.614 [2024-11-05 16:59:57.535287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.614 [2024-11-05 16:59:57.535298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-05 16:59:57.545391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.614 [2024-11-05 16:59:57.545447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.614 [2024-11-05 16:59:57.545456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.614 [2024-11-05 16:59:57.545462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.614 [2024-11-05 16:59:57.545466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.614 [2024-11-05 16:59:57.545476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.879 [2024-11-05 16:59:57.896334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.879 [2024-11-05 16:59:57.896422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.879 [2024-11-05 16:59:57.896431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.879 [2024-11-05 16:59:57.896436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.879 [2024-11-05 16:59:57.896441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.879 [2024-11-05 16:59:57.896455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.879 qpair failed and we were unable to recover it. 
00:35:50.879 [2024-11-05 16:59:57.906390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.879 [2024-11-05 16:59:57.906429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.879 [2024-11-05 16:59:57.906439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.879 [2024-11-05 16:59:57.906444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.879 [2024-11-05 16:59:57.906449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.879 [2024-11-05 16:59:57.906459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.879 qpair failed and we were unable to recover it. 
00:35:50.879 [2024-11-05 16:59:57.916273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.879 [2024-11-05 16:59:57.916317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.879 [2024-11-05 16:59:57.916326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.879 [2024-11-05 16:59:57.916331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.879 [2024-11-05 16:59:57.916336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.879 [2024-11-05 16:59:57.916346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.879 qpair failed and we were unable to recover it. 
00:35:50.879 [2024-11-05 16:59:57.926539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.879 [2024-11-05 16:59:57.926584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.879 [2024-11-05 16:59:57.926594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.879 [2024-11-05 16:59:57.926599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.879 [2024-11-05 16:59:57.926604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.879 [2024-11-05 16:59:57.926614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.879 qpair failed and we were unable to recover it. 
00:35:50.879 [2024-11-05 16:59:57.936422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.879 [2024-11-05 16:59:57.936466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.879 [2024-11-05 16:59:57.936476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.879 [2024-11-05 16:59:57.936481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.879 [2024-11-05 16:59:57.936486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:50.879 [2024-11-05 16:59:57.936496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.879 qpair failed and we were unable to recover it. 
00:35:51.141 [2024-11-05 16:59:57.946389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.141 [2024-11-05 16:59:57.946481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.141 [2024-11-05 16:59:57.946491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.141 [2024-11-05 16:59:57.946497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.141 [2024-11-05 16:59:57.946501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc4000b90 00:35:51.141 [2024-11-05 16:59:57.946511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:51.141 qpair failed and we were unable to recover it. 
00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 
Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Read completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 Write completed with error (sct=0, sc=8) 00:35:51.141 starting I/O failed 00:35:51.141 [2024-11-05 16:59:57.947562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.141 [2024-11-05 16:59:57.947807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2485e00 is same with the state(6) to be set 00:35:51.141 [2024-11-05 16:59:57.956523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.141 [2024-11-05 16:59:57.956627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.141 [2024-11-05 16:59:57.956693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.141 [2024-11-05 16:59:57.956719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.141 [2024-11-05 16:59:57.956740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc0000b90 00:35:51.141 [2024-11-05 16:59:57.956807] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:51.141 qpair failed and we were unable to recover it. 00:35:51.141 [2024-11-05 16:59:57.966583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.141 [2024-11-05 16:59:57.966683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.141 [2024-11-05 16:59:57.966731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.141 [2024-11-05 16:59:57.966757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.141 [2024-11-05 16:59:57.966774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cc0000b90 00:35:51.141 [2024-11-05 16:59:57.966815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:51.141 qpair failed and we were unable to recover it. 
00:35:51.141 [2024-11-05 16:59:57.976581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.141 [2024-11-05 16:59:57.976676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.141 [2024-11-05 16:59:57.976741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.141 [2024-11-05 16:59:57.976785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.141 [2024-11-05 16:59:57.976808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0ccc000b90 00:35:51.141 [2024-11-05 16:59:57.976863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.141 qpair failed and we were unable to recover it. 
00:35:51.141 [2024-11-05 16:59:57.986592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.141 [2024-11-05 16:59:57.986684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.141 [2024-11-05 16:59:57.986742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.141 [2024-11-05 16:59:57.986782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.141 [2024-11-05 16:59:57.986801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0ccc000b90 00:35:51.141 [2024-11-05 16:59:57.986851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.141 qpair failed and we were unable to recover it. 
00:35:51.141 [2024-11-05 16:59:57.996633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.141 [2024-11-05 16:59:57.996722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.141 [2024-11-05 16:59:57.996752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.141 [2024-11-05 16:59:57.996762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.141 [2024-11-05 16:59:57.996769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24900c0 00:35:51.141 [2024-11-05 16:59:57.996789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.141 qpair failed and we were unable to recover it. 
00:35:51.141 [2024-11-05 16:59:58.006587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.141 [2024-11-05 16:59:58.006694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.141 [2024-11-05 16:59:58.006724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.142 [2024-11-05 16:59:58.006733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.142 [2024-11-05 16:59:58.006740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24900c0 00:35:51.142 [2024-11-05 16:59:58.006763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.142 qpair failed and we were unable to recover it. 00:35:51.142 [2024-11-05 16:59:58.007151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2485e00 (9): Bad file descriptor 00:35:51.142 Initializing NVMe Controllers 00:35:51.142 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:51.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:51.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:51.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:51.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:51.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:51.142 Initialization complete. Launching workers. 
00:35:51.142 Starting thread on core 1 00:35:51.142 Starting thread on core 2 00:35:51.142 Starting thread on core 3 00:35:51.142 Starting thread on core 0 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:51.142 00:35:51.142 real 0m11.409s 00:35:51.142 user 0m21.677s 00:35:51.142 sys 0m3.490s 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.142 ************************************ 00:35:51.142 END TEST nvmf_target_disconnect_tc2 00:35:51.142 ************************************ 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:51.142 rmmod nvme_tcp 00:35:51.142 rmmod nvme_fabrics 00:35:51.142 rmmod nvme_keyring 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 3369161 ']' 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 3369161 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3369161 ']' 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3369161 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:51.142 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3369161 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3369161' 00:35:51.402 killing process with pid 3369161 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 3369161 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3369161 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@254 -- # local dev 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:51.402 16:59:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@121 -- # return 0 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:53.321 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@274 -- # iptr 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:35:53.583 00:35:53.583 real 0m21.607s 00:35:53.583 user 0m49.636s 00:35:53.583 sys 0m9.404s 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:53.583 ************************************ 00:35:53.583 END TEST nvmf_target_disconnect 00:35:53.583 ************************************ 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 
00:35:53.583 00:35:53.583 real 6m33.673s 00:35:53.583 user 11m30.184s 00:35:53.583 sys 2m12.757s 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:53.583 17:00:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.583 ************************************ 00:35:53.583 END TEST nvmf_host 00:35:53.583 ************************************ 00:35:53.583 17:00:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:53.583 17:00:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:53.583 17:00:00 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:53.583 17:00:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:53.583 17:00:00 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:53.583 17:00:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.583 ************************************ 00:35:53.583 START TEST nvmf_target_core_interrupt_mode 00:35:53.583 ************************************ 00:35:53.583 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:53.583 * Looking for test storage... 
00:35:53.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:53.583 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:53.583 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:35:53.583 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:53.845 17:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.845 --rc 
genhtml_branch_coverage=1 00:35:53.845 --rc genhtml_function_coverage=1 00:35:53.845 --rc genhtml_legend=1 00:35:53.845 --rc geninfo_all_blocks=1 00:35:53.845 --rc geninfo_unexecuted_blocks=1 00:35:53.845 00:35:53.845 ' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.845 --rc genhtml_branch_coverage=1 00:35:53.845 --rc genhtml_function_coverage=1 00:35:53.845 --rc genhtml_legend=1 00:35:53.845 --rc geninfo_all_blocks=1 00:35:53.845 --rc geninfo_unexecuted_blocks=1 00:35:53.845 00:35:53.845 ' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.845 --rc genhtml_branch_coverage=1 00:35:53.845 --rc genhtml_function_coverage=1 00:35:53.845 --rc genhtml_legend=1 00:35:53.845 --rc geninfo_all_blocks=1 00:35:53.845 --rc geninfo_unexecuted_blocks=1 00:35:53.845 00:35:53.845 ' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.845 --rc genhtml_branch_coverage=1 00:35:53.845 --rc genhtml_function_coverage=1 00:35:53.845 --rc genhtml_legend=1 00:35:53.845 --rc geninfo_all_blocks=1 00:35:53.845 --rc geninfo_unexecuted_blocks=1 00:35:53.845 00:35:53.845 ' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.845 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:53.846 17:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:53.846 ************************************ 00:35:53.846 START TEST nvmf_abort 00:35:53.846 ************************************ 00:35:53.846 17:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:53.846 * Looking for test storage... 00:35:53.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:35:53.846 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@341 -- # ver2_l=1 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:54.109 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:54.110 17:00:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.110 --rc genhtml_branch_coverage=1 00:35:54.110 --rc genhtml_function_coverage=1 00:35:54.110 --rc genhtml_legend=1 00:35:54.110 --rc geninfo_all_blocks=1 00:35:54.110 --rc geninfo_unexecuted_blocks=1 00:35:54.110 00:35:54.110 ' 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.110 --rc genhtml_branch_coverage=1 00:35:54.110 --rc genhtml_function_coverage=1 00:35:54.110 --rc genhtml_legend=1 00:35:54.110 --rc geninfo_all_blocks=1 00:35:54.110 --rc geninfo_unexecuted_blocks=1 00:35:54.110 00:35:54.110 ' 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.110 --rc genhtml_branch_coverage=1 00:35:54.110 --rc genhtml_function_coverage=1 00:35:54.110 --rc genhtml_legend=1 00:35:54.110 --rc geninfo_all_blocks=1 00:35:54.110 --rc geninfo_unexecuted_blocks=1 00:35:54.110 00:35:54.110 ' 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.110 --rc genhtml_branch_coverage=1 00:35:54.110 --rc 
genhtml_function_coverage=1 00:35:54.110 --rc genhtml_legend=1 00:35:54.110 --rc geninfo_all_blocks=1 00:35:54.110 --rc geninfo_unexecuted_blocks=1 00:35:54.110 00:35:54.110 ' 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:54.110 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:54.110 17:00:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:54.110 
17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:35:54.110 17:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # 
mlx=() 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:02.250 
17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:02.250 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:02.250 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:02.250 17:00:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.250 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:02.250 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:02.251 17:00:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:02.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:02.251 17:00:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@247 -- # create_target_ns 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:02.251 17:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 
00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:02.251 10.0.0.1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:36:02.251 10.0.0.2 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j 
ACCEPT 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:02.251 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:02.252 17:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:02.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:02.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.611 ms 00:36:02.252 00:36:02.252 --- 10.0.0.1 ping statistics --- 00:36:02.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.252 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:02.252 
17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:02.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:36:02.252 00:36:02.252 --- 10.0.0.2 ping statistics --- 00:36:02.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.252 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 
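The setup trace above repeatedly runs a `val_to_ip` helper (nvmf/setup.sh@11-13) that turns the integer pool values 167772161 and 167772162 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 before `ip addr add` is called. A minimal, self-contained sketch of that conversion, reconstructed from the trace (the actual nvmf/setup.sh implementation may differ in detail), is:

```shell
# Sketch of val_to_ip as seen in the trace: unpack a 32-bit integer
# into dotted-quad notation, one octet per byte, most significant first.
val_to_ip() {
  local val=$(( $1 ))
  printf '%u.%u.%u.%u\n' \
    $(( val >> 24 & 0xff )) \
    $(( val >> 16 & 0xff )) \
    $(( val >> 8  & 0xff )) \
    $(( val       & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1  (167772161 == 0x0A000001)
val_to_ip 167772162   # -> 10.0.0.2
```

This matches the pool arithmetic in setup_interfaces: the pool starts at 0x0a000001 and each initiator/target pair consumes two consecutive addresses.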
00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:02.252 17:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort 
-- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.252 17:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:02.252 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:36:02.253 ' 00:36:02.253 17:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=3374743 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 3374743 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3374743 ']' 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.253 17:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:02.253 17:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.253 [2024-11-05 17:00:08.508292] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:02.253 [2024-11-05 17:00:08.509478] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:36:02.253 [2024-11-05 17:00:08.509534] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.253 [2024-11-05 17:00:08.608343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:02.253 [2024-11-05 17:00:08.660187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.253 [2024-11-05 17:00:08.660239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.253 [2024-11-05 17:00:08.660247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.253 [2024-11-05 17:00:08.660255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.253 [2024-11-05 17:00:08.660261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
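The nvmf_tgt process above is launched with `-m 0xE` (and DPDK sees `-c 0xE`), and the app layer reports "Total cores available: 3" before reactors start on cores 1, 2, and 3. That is just the set bits of the hex core mask; the mapping can be sketched with a small helper (`mask_to_cores` is a hypothetical name for illustration, not part of SPDK):

```shell
# Decode an SPDK/DPDK -m core mask (e.g. 0xE = binary 1110) into the
# list of core IDs it selects, least significant bit = core 0.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -eq 1 ]; then out="$out $core"; fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${out# }"
}

mask_to_cores 0xE   # -> 1 2 3
```

So `-m 0xE` deliberately leaves core 0 free, which is consistent with the three reactor threads (plus app_thread) seen in the interrupt-mode notices that follow.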
00:36:02.253 [2024-11-05 17:00:08.662018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:02.253 [2024-11-05 17:00:08.662154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.253 [2024-11-05 17:00:08.662155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:02.253 [2024-11-05 17:00:08.737805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:02.253 [2024-11-05 17:00:08.737882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:02.253 [2024-11-05 17:00:08.738520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:02.253 [2024-11-05 17:00:08.738828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:02.514 17:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.514 [2024-11-05 17:00:09.367051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.514 Malloc0 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.514 Delay0 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.514 [2024-11-05 17:00:09.470978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.514 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.515 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.515 17:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:02.774 [2024-11-05 17:00:09.637831] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:04.685 Initializing NVMe Controllers 00:36:04.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:04.685 controller IO queue size 128 less than required 00:36:04.685 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:04.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:04.685 Initialization complete. Launching workers. 00:36:04.685 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28939 00:36:04.685 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28996, failed to submit 66 00:36:04.685 success 28939, unsuccessful 57, failed 0 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:04.685 
17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:04.685 rmmod nvme_tcp 00:36:04.685 rmmod nvme_fabrics 00:36:04.685 rmmod nvme_keyring 00:36:04.685 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 3374743 ']' 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 3374743 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3374743 ']' 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3374743 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3374743 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3374743' 00:36:04.946 killing process with pid 3374743 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3374743 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3374743 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:04.946 17:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in 
"${dev_map[@]}" 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 
00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:36:07.492 00:36:07.492 real 0m13.266s 00:36:07.492 user 0m10.980s 00:36:07.492 sys 0m6.739s 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.492 ************************************ 00:36:07.492 END TEST nvmf_abort 00:36:07.492 ************************************ 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:07.492 ************************************ 00:36:07.492 START TEST nvmf_ns_hotplug_stress 00:36:07.492 ************************************ 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:07.492 * Looking for test storage... 00:36:07.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@340 -- # ver1_l=2 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 
00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:07.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.492 --rc genhtml_branch_coverage=1 00:36:07.492 --rc genhtml_function_coverage=1 00:36:07.492 --rc genhtml_legend=1 00:36:07.492 --rc geninfo_all_blocks=1 00:36:07.492 --rc geninfo_unexecuted_blocks=1 00:36:07.492 00:36:07.492 ' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:07.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.492 --rc genhtml_branch_coverage=1 00:36:07.492 --rc genhtml_function_coverage=1 00:36:07.492 --rc genhtml_legend=1 00:36:07.492 --rc geninfo_all_blocks=1 00:36:07.492 --rc geninfo_unexecuted_blocks=1 00:36:07.492 00:36:07.492 ' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:07.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.492 --rc genhtml_branch_coverage=1 00:36:07.492 --rc genhtml_function_coverage=1 00:36:07.492 --rc genhtml_legend=1 00:36:07.492 --rc 
geninfo_all_blocks=1 00:36:07.492 --rc geninfo_unexecuted_blocks=1 00:36:07.492 00:36:07.492 ' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:07.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.492 --rc genhtml_branch_coverage=1 00:36:07.492 --rc genhtml_function_coverage=1 00:36:07.492 --rc genhtml_legend=1 00:36:07.492 --rc geninfo_all_blocks=1 00:36:07.492 --rc geninfo_unexecuted_blocks=1 00:36:07.492 00:36:07.492 ' 00:36:07.492 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:07.493 17:00:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.493 
17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:07.493 17:00:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:36:07.493 17:00:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:36:07.493 17:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@135 -- # net_devs=() 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:15.634 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:15.634 17:00:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:15.634 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:15.634 17:00:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:15.634 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:15.634 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:15.635 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # create_target_ns 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # 
local ns=nvmf_ns_spdk 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:36:15.635 17:00:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:15.635 10.0.0.1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:15.635 10.0.0.2 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:15.635 17:00:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:15.635 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 
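The `val_to_ip` calls traced above (`val_to_ip 167772161` printing `10.0.0.1`, `val_to_ip 167772162` printing `10.0.0.2`) convert a 32-bit integer from the address pool into dotted-quad notation. A minimal standalone re-creation of that helper (an assumption sketched from the trace, not the exact `nvmf/setup.sh` source) looks like:

```shell
# val_to_ip: split a 32-bit integer into four octets and print it as an
# IPv4 address, as the traced printf '%u.%u.%u.%u' suggests.
# (Hypothetical re-creation based on the xtrace output above.)
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

Here 167772161 is just 0x0A000001, so the high octet is 10 and the low octet is 1, matching the `10.0.0.1/24` the log assigns to `cvl_0_0`.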
00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:15.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:15.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.608 ms 00:36:15.636 00:36:15.636 --- 10.0.0.1 ping statistics --- 00:36:15.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.636 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:15.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:36:15.636 00:36:15.636 --- 10.0.0.2 ping statistics --- 00:36:15.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.636 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
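The loop traced above (`setup.sh@31`–`@33`: `(( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))`, then `(( _dev++, ip_pool += 2 ))`) carves consecutive address pairs out of `ip_pool=0x0a000001`: pair N takes `ip_pool + 2N` for the initiator and the next address for the target, bounded so the pool never spills past the /24. A simplified sketch of that allocation (an assumption distilled from the trace, not the verbatim `setup.sh` logic):

```shell
# Sketch of the ip_pool pair allocation seen in the xtrace: each
# initiator/target pair consumes two consecutive addresses, and the
# (_dev + no) * 2 <= 255 guard keeps the pool inside one /24.
ip_pool=$((0x0a000001))   # 167772161, i.e. 10.0.0.1
no=1                      # one initiator/target pair, as in this run

for (( _dev = 0; _dev < no && (_dev + no) * 2 <= 255; _dev++ )); do
  initiator_ip=$(( ip_pool + _dev * 2 ))   # 10.0.0.1 for pair 0
  target_ip=$(( initiator_ip + 1 ))        # 10.0.0.2 for pair 0
  printf 'pair%u: initiator=%u target=%u\n' "$_dev" "$initiator_ip" "$target_ip"
done
```

With `no=1` this yields exactly the two addresses the log assigns and then ping-verifies: 167772161 (`10.0.0.1` on `cvl_0_0`) and 167772162 (`10.0.0.2` on `cvl_0_1` inside `nvmf_ns_spdk`).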
00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- 
# local dev=initiator1 in_ns= ip 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:15.636 17:00:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:15.636 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:36:15.637 ' 00:36:15.637 17:00:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=3380157 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 3380157 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
3380157 ']' 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:15.637 17:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:15.637 [2024-11-05 17:00:21.686756] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:15.637 [2024-11-05 17:00:21.687896] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:36:15.637 [2024-11-05 17:00:21.687952] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.637 [2024-11-05 17:00:21.785722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:15.637 [2024-11-05 17:00:21.836380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.637 [2024-11-05 17:00:21.836434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:15.637 [2024-11-05 17:00:21.836443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.637 [2024-11-05 17:00:21.836450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.637 [2024-11-05 17:00:21.836456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:15.637 [2024-11-05 17:00:21.838252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:15.637 [2024-11-05 17:00:21.838421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.637 [2024-11-05 17:00:21.838422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:15.637 [2024-11-05 17:00:21.914607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:15.637 [2024-11-05 17:00:21.914676] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:15.637 [2024-11-05 17:00:21.915439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:15.637 [2024-11-05 17:00:21.915818] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:15.637 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:15.637 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:36:15.637 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:15.637 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:15.637 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:15.637 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.637 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:15.637 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:15.898 [2024-11-05 17:00:22.735260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.898 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:15.898 17:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:16.159 [2024-11-05 17:00:23.104001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:36:16.159 17:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:16.420 17:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:16.680 Malloc0 00:36:16.680 17:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:16.680 Delay0 00:36:16.681 17:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.941 17:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:16.941 NULL1 00:36:17.202 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:17.202 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3380590 00:36:17.202 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:17.202 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:17.202 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.462 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:17.723 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:17.723 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:17.723 true 00:36:17.723 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:17.723 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.984 17:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.244 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:18.244 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:18.506 true 00:36:18.506 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:18.506 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.506 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.767 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:18.767 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:19.028 true 00:36:19.028 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:19.028 17:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.028 17:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.290 17:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:19.290 17:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:19.551 true 00:36:19.551 17:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:19.551 17:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.812 17:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.812 17:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:19.812 17:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:20.073 true 00:36:20.073 17:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:20.073 17:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:20.334 17:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.595 17:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:20.595 17:00:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:20.596 true 00:36:20.596 17:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:20.596 17:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:20.857 17:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.119 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:21.119 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:21.380 true 00:36:21.380 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:21.380 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.380 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.641 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 
00:36:21.641 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:21.902 true 00:36:21.902 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:21.902 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.902 17:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.163 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:22.163 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:22.424 true 00:36:22.424 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:22.424 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.684 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.684 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1010 00:36:22.684 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:22.944 true 00:36:22.944 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:22.944 17:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.204 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.204 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:23.204 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:23.465 true 00:36:23.465 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:23.465 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.725 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.984 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:23.984 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:23.984 true 00:36:23.984 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:23.984 17:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.244 17:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.505 17:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:24.505 17:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:24.505 true 00:36:24.505 17:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:24.505 17:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.766 17:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.027 17:00:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:25.027 17:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:25.027 true 00:36:25.027 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:25.027 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.287 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.548 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:25.548 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:25.548 true 00:36:25.809 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:25.809 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.809 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:36:26.070 17:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:26.070 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:26.331 true 00:36:26.331 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:26.331 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.331 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.593 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:26.593 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:26.854 true 00:36:26.854 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:26.854 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.115 17:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.115 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:27.115 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:27.375 true 00:36:27.375 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:27.375 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.636 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.636 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:27.636 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:27.896 true 00:36:27.896 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:27.896 17:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.157 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.417 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:28.418 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:28.418 true 00:36:28.418 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:28.418 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.677 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.938 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:28.938 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:28.938 true 00:36:28.938 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:28.938 17:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.198 17:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.483 17:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:29.483 17:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:29.483 true 00:36:29.483 17:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:29.483 17:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.798 17:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.090 17:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:30.090 17:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:30.090 true 00:36:30.090 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:30.090 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.350 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.610 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:30.610 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:30.610 true 00:36:30.610 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:30.610 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.870 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.130 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:31.130 17:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:31.130 true 00:36:31.130 17:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:31.130 17:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.390 17:00:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.650 17:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:31.650 17:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:31.650 true 00:36:31.910 17:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:31.910 17:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.910 17:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.170 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:32.170 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:32.430 true 00:36:32.430 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:32.430 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:36:32.430 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.703 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:32.703 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:32.969 true 00:36:32.969 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:32.969 17:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.969 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.230 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:33.230 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:33.490 true 00:36:33.490 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:33.490 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:33.751 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.751 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:36:33.751 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:36:34.011 true 00:36:34.011 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:34.011 17:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.272 17:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.533 17:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:36:34.533 17:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:36:34.533 true 00:36:34.533 17:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:34.533 17:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.793 17:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.054 17:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:36:35.054 17:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:36:35.054 true 00:36:35.054 17:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:35.054 17:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.315 17:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.576 17:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:36:35.576 17:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:36:35.837 true 00:36:35.837 17:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:35.837 17:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.837 17:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.098 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:36:36.098 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:36:36.360 true 00:36:36.360 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:36.360 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.360 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.620 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:36:36.620 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:36:36.881 true 00:36:36.881 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:36.881 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.142 17:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.142 17:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:36:37.142 17:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:36:37.402 true 00:36:37.402 17:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:37.402 17:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.662 17:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.662 17:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:36:37.662 17:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:36:37.922 true 00:36:37.922 17:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:37.922 17:00:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.183 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.443 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:36:38.443 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:36:38.443 true 00:36:38.443 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:38.443 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.704 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.964 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:36:38.964 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:36:38.964 true 00:36:38.964 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 
00:36:38.964 17:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.224 17:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.484 17:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:36:39.484 17:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:36:39.484 true 00:36:39.745 17:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:39.745 17:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.745 17:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.004 17:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:36:40.004 17:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:36:40.264 true 00:36:40.264 17:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3380590 00:36:40.264 17:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.264 17:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.524 17:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:36:40.524 17:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:36:40.785 true 00:36:40.785 17:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:40.785 17:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.045 17:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.045 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:36:41.045 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:36:41.306 true 00:36:41.306 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:41.306 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.567 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.567 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:36:41.567 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:36:41.828 true 00:36:41.828 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:41.828 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.089 17:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.089 17:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:36:42.089 17:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:36:42.349 true 00:36:42.349 17:00:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:42.349 17:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.610 17:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.871 17:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:36:42.871 17:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:36:42.871 true 00:36:42.871 17:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:42.871 17:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.131 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.392 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:36:43.392 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:36:43.392 true 
00:36:43.392 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:43.392 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.650 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.910 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:36:43.910 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:36:43.910 true 00:36:44.170 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:44.170 17:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.170 17:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.431 17:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:36:44.431 17:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 
00:36:44.691 true 00:36:44.691 17:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:44.691 17:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.691 17:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.951 17:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:36:44.951 17:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:36:45.211 true 00:36:45.211 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:45.211 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.211 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.472 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:36:45.472 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1051 00:36:45.732 true 00:36:45.732 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:45.732 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.992 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.992 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:36:45.992 17:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:36:46.254 true 00:36:46.254 17:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:46.254 17:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.515 17:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.515 17:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:36:46.515 17:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:36:46.775 true 00:36:46.775 17:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:46.775 17:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.036 17:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.297 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:36:47.297 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:36:47.297 true 00:36:47.297 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:47.297 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.558 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.558 Initializing NVMe Controllers 00:36:47.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:47.558 Controller IO queue size 128, less than required. 
00:36:47.558 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:47.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:47.558 Initialization complete. Launching workers. 00:36:47.558 ======================================================== 00:36:47.558 Latency(us) 00:36:47.558 Device Information : IOPS MiB/s Average min max 00:36:47.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29734.58 14.52 4304.66 1483.07 11240.77 00:36:47.558 ======================================================== 00:36:47.558 Total : 29734.58 14.52 4304.66 1483.07 11240.77 00:36:47.558 00:36:47.819 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:36:47.819 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:36:47.819 true 00:36:47.819 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3380590 00:36:47.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3380590) - No such process 00:36:47.819 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3380590 00:36:47.819 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.080 17:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
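The trace above is the core of the hotplug stress loop from `ns_hotplug_stress.sh` (the `@44`–`@50` markers): while the target process is alive (`kill -0`), it removes namespace 1, re-adds `Delay0`, bumps `null_size`, and resizes `NULL1`, ending when the target exits ("No such process"). A minimal standalone sketch of that control flow, with `rpc.py` replaced by a stub function and the current shell's PID standing in for the nvmf target (both are placeholders, not the real harness):

```shell
# Stub for scripts/rpc.py so the sketch runs without an SPDK target;
# the real test invokes /var/jenkins/.../spdk/scripts/rpc.py instead.
rpc() { last_rpc="$*"; }

null_size=1020
target_pid=$$    # stand-in for the nvmf_tgt PID (3380590 in the log)

for _ in 1 2 3; do
    kill -0 "$target_pid" 2>/dev/null || break     # stop once the target is gone
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"        # grow NULL1 by one unit per pass
done
```

In the real run the loop iterates until the target dies, which is why `null_size` climbs monotonically (1020, 1021, ...) through the log; the sketch caps it at three passes.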
00:36:48.080 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:48.080 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:48.080 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:48.080 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.080 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:48.341 null0 00:36:48.341 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.341 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.341 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:48.603 null1 00:36:48.603 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.603 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.603 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:48.603 null2 00:36:48.603 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.603 17:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.603 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:48.864 null3 00:36:48.864 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.864 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.864 17:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:49.125 null4 00:36:49.125 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.125 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.125 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:49.125 null5 00:36:49.386 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.386 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.386 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:49.386 null6 00:36:49.386 17:00:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.386 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.386 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:49.649 null7 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.649 17:00:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3386939 3386941 3386945 3386947 3386950 3386953 3386956 3386958 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.649 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.912 17:00:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.912 17:00:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:49.912 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.173 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.173 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.173 17:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:50.173 17:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.173 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.433 17:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.433 17:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.433 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.695 17:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.695 17:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.695 17:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.695 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:50.957 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.957 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.957 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.957 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.957 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.957 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.957 17:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.957 17:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.957 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.957 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.957 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
8 nqn.2016-06.io.spdk:cnode1 null7 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:51.219 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.480 17:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:51.480 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.740 17:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.740 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.000 17:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:52.000 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:52.000 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:36:52.000 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:52.000 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.260 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:52.520 17:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.520 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:52.788 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:53.049 17:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:53.049 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:53.049 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.049 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.049 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:53.049 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:53.049 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:53.049 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:53.049 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.310 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20}
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:36:53.571 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 3380157 ']'
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 3380157
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3380157 ']'
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3380157
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:53.571 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3380157
00:36:53.831 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3380157'
killing process with pid 3380157
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3380157
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3380157
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:36:53.832 17:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns=
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=()
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore
00:36:56.376
00:36:56.376 real 0m48.802s
00:36:56.376 user 3m3.551s
00:36:56.376 sys 0m21.940s
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:36:56.376 ************************************
00:36:56.376 END TEST nvmf_ns_hotplug_stress
00:36:56.376 ************************************
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:36:56.376 17:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:56.376 ************************************
00:36:56.376 START TEST nvmf_delete_subsystem
00:36:56.376 ************************************
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:36:56.376 * Looking for test storage...
00:36:56.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:36:56.376 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:36:56.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:56.377 --rc genhtml_branch_coverage=1
00:36:56.377 --rc genhtml_function_coverage=1
00:36:56.377 --rc genhtml_legend=1
00:36:56.377 --rc geninfo_all_blocks=1
00:36:56.377 --rc geninfo_unexecuted_blocks=1
00:36:56.377
00:36:56.377 '
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:36:56.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:56.377 --rc genhtml_branch_coverage=1
00:36:56.377 --rc genhtml_function_coverage=1
00:36:56.377 --rc genhtml_legend=1
00:36:56.377 --rc geninfo_all_blocks=1
00:36:56.377 --rc geninfo_unexecuted_blocks=1
00:36:56.377
00:36:56.377 '
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:36:56.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:56.377 --rc genhtml_branch_coverage=1
00:36:56.377 --rc genhtml_function_coverage=1
00:36:56.377 --rc genhtml_legend=1
00:36:56.377 --rc geninfo_all_blocks=1
00:36:56.377 --rc geninfo_unexecuted_blocks=1
00:36:56.377
00:36:56.377 '
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:36:56.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:56.377 --rc genhtml_branch_coverage=1
00:36:56.377 --rc genhtml_function_coverage=1
00:36:56.377 --rc genhtml_legend=1
00:36:56.377 --rc geninfo_all_blocks=1
00:36:56.377 --rc geninfo_unexecuted_blocks=1
00:36:56.377
00:36:56.377 '
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']'
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode)
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable
00:36:56.377 17:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=()
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=()
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=()
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=()
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs
00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136
-- # e810=() 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:04.522 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:04.522 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:04.522 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:04.523 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:04.523 17:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:04.523 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # create_target_ns 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:04.523 17:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- 
# (( _dev < max + no )) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@143 
-- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:04.523 10.0.0.1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:04.523 10.0.0.2 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # 
set_up cvl_0_0 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:04.523 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 
-j ACCEPT 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:04.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:04.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.592 ms 00:37:04.524 00:37:04.524 --- 10.0.0.1 ping statistics --- 00:37:04.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.524 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:04.524 17:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:04.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:04.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:37:04.524 00:37:04.524 --- 10.0.0.2 ping statistics --- 00:37:04.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.524 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:04.524 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:04.525 17:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:04.525 17:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:37:04.525 ' 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 
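The address-resolution trace above repeatedly reads each test interface's IP out of its `ifalias` sysfs attribute, optionally inside the `nvmf_ns_spdk` network namespace. A minimal standalone sketch of that lookup (function name and the sysfs-root parameter are assumptions for illustration, not verbatim SPDK `setup.sh` code):

```shell
# Minimal sketch of the ifalias-based IP lookup traced in the log above.
# setup.sh stores each test interface's IP in /sys/class/net/<dev>/ifalias;
# the sysfs root is a parameter here so the helper can be exercised without
# real interfaces.
get_dev_ip() {
    local dev=$1 sysfs=${2:-/sys/class/net} ip
    ip=$(cat "$sysfs/$dev/ifalias" 2>/dev/null)
    # Empty ifalias (or missing device) yields no output, mirroring the
    # "dev= / return 0" branches seen for initiator1/target1 in the trace.
    [[ -n $ip ]] && echo "$ip"
}
```

In the log, the same logic resolves `initiator0` to `cvl_0_0`/10.0.0.1 and `target0` to `cvl_0_1`/10.0.0.2 (the latter via `ip netns exec nvmf_ns_spdk`), while `initiator1` and `target1` have no backing device and resolve to empty.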
00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=3392021 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 3392021 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3392021 ']' 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:04.525 17:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.525 [2024-11-05 17:01:10.628998] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:04.525 [2024-11-05 17:01:10.629982] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:37:04.525 [2024-11-05 17:01:10.630020] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.525 [2024-11-05 17:01:10.706032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:04.525 [2024-11-05 17:01:10.740853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:04.525 [2024-11-05 17:01:10.740887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:04.525 [2024-11-05 17:01:10.740897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:04.525 [2024-11-05 17:01:10.740904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:04.525 [2024-11-05 17:01:10.740912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:04.525 [2024-11-05 17:01:10.742055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.525 [2024-11-05 17:01:10.742056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.525 [2024-11-05 17:01:10.796559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:04.525 [2024-11-05 17:01:10.797070] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:04.525 [2024-11-05 17:01:10.797411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 
-- # xtrace_disable 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.525 [2024-11-05 17:01:11.462622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.525 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.526 [2024-11-05 17:01:11.490920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:04.526 17:01:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.526 NULL1 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.526 Delay0 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3392311 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:04.526 17:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:04.787 [2024-11-05 17:01:11.588001] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:06.696 17:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:06.696 17:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.696 17:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed 
with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 [2024-11-05 17:01:13.788381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c960 is same with the state(6) to be set 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with 
error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 
00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read 
completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 starting I/O failed: -6 00:37:06.958 starting I/O failed: -6 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with 
error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Read completed with error (sct=0, sc=8) 00:37:06.958 Write completed with error (sct=0, sc=8) 00:37:06.959 Read completed with error (sct=0, sc=8) 00:37:06.959 Read completed with error (sct=0, sc=8) 00:37:06.959 Write completed with error (sct=0, sc=8) 00:37:06.959 Read completed with error (sct=0, sc=8) 00:37:06.959 Read completed with error (sct=0, sc=8) 00:37:06.959 Read completed with error (sct=0, sc=8) 00:37:06.959 Write completed with error (sct=0, sc=8) 00:37:06.959 Read completed with error (sct=0, sc=8) 00:37:06.959 Write completed with error (sct=0, sc=8) 00:37:06.959 Read completed with error (sct=0, sc=8) 00:37:06.959 Read completed with error (sct=0, sc=8) 
00:37:06.959 Read completed with error (sct=0, sc=8) 00:37:06.959 [2024-11-05 17:01:13.793196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb63c00d490 is same with the state(6) to be set 00:37:07.900 [2024-11-05 17:01:14.767797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153d9a0 is same with the state(6) to be set 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 [2024-11-05 17:01:14.791931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153cb40 is same with the state(6) to be set 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, 
sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 [2024-11-05 17:01:14.792180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c780 is same with the state(6) to be set 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 
Read completed with error (sct=0, sc=8) 00:37:07.900 [2024-11-05 17:01:14.796666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb63c00d020 is same with the state(6) to be set 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Read completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 Write completed with error (sct=0, sc=8) 00:37:07.900 [2024-11-05 17:01:14.796771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb63c00d7c0 is same with the state(6) to be set 00:37:07.900 Initializing NVMe Controllers 00:37:07.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:07.900 Controller 
IO queue size 128, less than required. 00:37:07.901 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:07.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:07.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:07.901 Initialization complete. Launching workers. 00:37:07.901 ======================================================== 00:37:07.901 Latency(us) 00:37:07.901 Device Information : IOPS MiB/s Average min max 00:37:07.901 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.32 0.08 908897.35 233.40 1007467.27 00:37:07.901 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.82 0.08 935843.04 399.99 2003607.67 00:37:07.901 ======================================================== 00:37:07.901 Total : 326.13 0.16 922349.63 233.40 2003607.67 00:37:07.901 00:37:07.901 [2024-11-05 17:01:14.797357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153d9a0 (9): Bad file descriptor 00:37:07.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:07.901 17:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.901 17:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:07.901 17:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3392311 00:37:07.901 17:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@35 -- # kill -0 3392311 00:37:08.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3392311) - No such process 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3392311 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3392311 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3392311 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.471 [2024-11-05 17:01:15.330882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3392981 00:37:08.471 17:01:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3392981 00:37:08.471 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:08.471 [2024-11-05 17:01:15.406126] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:09.042 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:09.042 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3392981 00:37:09.042 17:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:09.302 17:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:09.302 17:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3392981 00:37:09.302 17:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:09.873 17:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:09.873 17:01:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3392981 00:37:09.873 17:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:10.443 17:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:10.443 17:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3392981 00:37:10.443 17:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:11.014 17:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:11.014 17:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3392981 00:37:11.014 17:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:11.585 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:11.585 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3392981 00:37:11.585 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:11.585 Initializing NVMe Controllers 00:37:11.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:11.585 Controller IO queue size 128, less than required. 00:37:11.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:37:11.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:11.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:11.585 Initialization complete. Launching workers. 00:37:11.585 ======================================================== 00:37:11.585 Latency(us) 00:37:11.585 Device Information : IOPS MiB/s Average min max 00:37:11.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002412.52 1000289.04 1005989.12 00:37:11.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004090.08 1000433.54 1009991.28 00:37:11.585 ======================================================== 00:37:11.585 Total : 256.00 0.12 1003251.30 1000289.04 1009991.28 00:37:11.585 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3392981 00:37:11.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3392981) - No such process 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3392981 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:11.845 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:11.845 rmmod nvme_tcp 00:37:12.105 rmmod nvme_fabrics 00:37:12.105 rmmod nvme_keyring 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 3392021 ']' 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 3392021 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3392021 ']' 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3392021 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:12.105 17:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3392021 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3392021' 00:37:12.105 killing process with pid 3392021 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3392021 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3392021 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:12.105 17:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # 
return 0 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- 
# eval ' ip addr flush dev cvl_0_1' 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:37:14.653 00:37:14.653 real 0m18.233s 00:37:14.653 user 0m26.639s 00:37:14.653 sys 0m7.267s 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.653 ************************************ 00:37:14.653 END TEST nvmf_delete_subsystem 00:37:14.653 ************************************ 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:14.653 ************************************ 00:37:14.653 START TEST nvmf_host_management 00:37:14.653 ************************************ 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:14.653 * Looking for test storage... 00:37:14.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:14.653 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@337 -- # IFS=.-: 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:37:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.654 --rc genhtml_branch_coverage=1 00:37:14.654 --rc genhtml_function_coverage=1 00:37:14.654 --rc genhtml_legend=1 00:37:14.654 --rc geninfo_all_blocks=1 00:37:14.654 --rc geninfo_unexecuted_blocks=1 00:37:14.654 00:37:14.654 ' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.654 --rc genhtml_branch_coverage=1 00:37:14.654 --rc genhtml_function_coverage=1 00:37:14.654 --rc genhtml_legend=1 00:37:14.654 --rc geninfo_all_blocks=1 00:37:14.654 --rc geninfo_unexecuted_blocks=1 00:37:14.654 00:37:14.654 ' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.654 --rc genhtml_branch_coverage=1 00:37:14.654 --rc genhtml_function_coverage=1 00:37:14.654 --rc genhtml_legend=1 00:37:14.654 --rc geninfo_all_blocks=1 00:37:14.654 --rc geninfo_unexecuted_blocks=1 00:37:14.654 00:37:14.654 ' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.654 --rc genhtml_branch_coverage=1 00:37:14.654 --rc genhtml_function_coverage=1 00:37:14.654 --rc genhtml_legend=1 00:37:14.654 --rc geninfo_all_blocks=1 00:37:14.654 --rc geninfo_unexecuted_blocks=1 00:37:14.654 00:37:14.654 ' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:14.654 17:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:14.654 17:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:14.654 
17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:14.654 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:37:14.655 17:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # net_devs=() 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 
00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:22.980 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:22.980 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:22.980 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:22.980 17:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:22.980 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@247 -- # create_target_ns 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:22.980 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:22.981 17:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < 
max + no )) 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 
ns=nvmf_ns_spdk 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:22.981 10.0.0.1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 
NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:22.981 10.0.0.2 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:37:22.981 17:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:22.981 17:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:22.981 17:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:22.981 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:22.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:22.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.573 ms 00:37:22.982 00:37:22.982 --- 10.0.0.1 ping statistics --- 00:37:22.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:22.982 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- 
# dev=cvl_0_1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:22.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:22.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:37:22.982 00:37:22.982 --- 10.0.0.2 ping statistics --- 00:37:22.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:22.982 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 
00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # 
get_net_dev target0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:22.982 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:22.983 17:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:37:22.983 ' 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:22.983 17:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=3397924 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 3397924 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3397924 ']' 00:37:22.983 
17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:22.983 17:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.983 [2024-11-05 17:01:28.962736] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:22.983 [2024-11-05 17:01:28.963915] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:37:22.983 [2024-11-05 17:01:28.963969] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:22.983 [2024-11-05 17:01:29.064645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:22.983 [2024-11-05 17:01:29.117402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:22.983 [2024-11-05 17:01:29.117455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:22.983 [2024-11-05 17:01:29.117464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:22.983 [2024-11-05 17:01:29.117472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:22.983 [2024-11-05 17:01:29.117478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:22.983 [2024-11-05 17:01:29.119450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:22.983 [2024-11-05 17:01:29.119617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:22.983 [2024-11-05 17:01:29.119829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:22.983 [2024-11-05 17:01:29.119848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.983 [2024-11-05 17:01:29.195934] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:22.983 [2024-11-05 17:01:29.196600] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:22.983 [2024-11-05 17:01:29.197430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:22.983 [2024-11-05 17:01:29.197633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:22.983 [2024-11-05 17:01:29.197775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.983 [2024-11-05 17:01:29.844838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.983 17:01:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.983 Malloc0 00:37:22.983 [2024-11-05 17:01:29.933070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3398054 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3398054 /var/tmp/bdevperf.sock 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3398054 ']' 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:22.983 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:22.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:22.984 { 00:37:22.984 "params": { 00:37:22.984 "name": "Nvme$subsystem", 00:37:22.984 "trtype": "$TEST_TRANSPORT", 00:37:22.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.984 "adrfam": "ipv4", 00:37:22.984 "trsvcid": "$NVMF_PORT", 00:37:22.984 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.984 "hdgst": ${hdgst:-false}, 00:37:22.984 "ddgst": ${ddgst:-false} 00:37:22.984 }, 00:37:22.984 "method": "bdev_nvme_attach_controller" 00:37:22.984 } 00:37:22.984 EOF 00:37:22.984 )") 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:37:22.984 17:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:22.984 "params": { 00:37:22.984 "name": "Nvme0", 00:37:22.984 "trtype": "tcp", 00:37:22.984 "traddr": "10.0.0.2", 00:37:22.984 "adrfam": "ipv4", 00:37:22.984 "trsvcid": "4420", 00:37:22.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.984 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.984 "hdgst": false, 00:37:22.984 "ddgst": false 00:37:22.984 }, 00:37:22.984 "method": "bdev_nvme_attach_controller" 00:37:22.984 }' 00:37:23.245 [2024-11-05 17:01:30.047325] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:37:23.245 [2024-11-05 17:01:30.047394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398054 ] 00:37:23.245 [2024-11-05 17:01:30.118979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.245 [2024-11-05 17:01:30.155457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.506 Running I/O for 10 seconds... 
00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:24.080 17:01:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:24.080 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.081 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.081 
[2024-11-05 17:01:30.912710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912861] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.081 [2024-11-05 17:01:30.912956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.081 [2024-11-05 17:01:30.912964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... 51 further identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs elided: cid:2-52, lba:98560-104960 in steps of 128 ...] 00:37:24.082 [2024-11-05 17:01:30.913865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.082 [2024-11-05 17:01:30.913872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.082 [2024-11-05 17:01:30.913881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e41a0 is same with the state(6) to be set 00:37:24.082 [2024-11-05 17:01:30.915152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:24.082 task offset: 105216 on job bdev=Nvme0n1 fails 00:37:24.082 00:37:24.082 Latency(us) 00:37:24.082 [2024-11-05T16:01:31.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.082 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:24.082 Job: Nvme0n1 ended in about 0.58 seconds with error 00:37:24.082 Verification LBA range: start 0x0 length 0x400 00:37:24.082 Nvme0n1 : 0.58 1334.71 83.42 111.23 0.00 43240.30 1679.36 36263.25 00:37:24.082 [2024-11-05T16:01:31.145Z] =================================================================================================================== 00:37:24.082 [2024-11-05T16:01:31.145Z] Total : 1334.71 83.42 111.23 0.00 43240.30 1679.36 36263.25 00:37:24.082 [2024-11-05 17:01:30.917173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:24.082 [2024-11-05 17:01:30.917198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xecb000 (9): Bad file descriptor 00:37:24.082 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.082 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:24.082 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.082 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.082 [2024-11-05 17:01:30.918626] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:24.082 [2024-11-05 17:01:30.918709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:24.082 [2024-11-05 17:01:30.918740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.082 [2024-11-05 17:01:30.918765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:24.082 [2024-11-05 17:01:30.918774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:24.082 [2024-11-05 17:01:30.918783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.083 [2024-11-05 17:01:30.918791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xecb000 00:37:24.083 [2024-11-05 17:01:30.918813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb000 (9): Bad file descriptor 00:37:24.083 [2024-11-05 
17:01:30.918827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:24.083 [2024-11-05 17:01:30.918836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:24.083 [2024-11-05 17:01:30.918846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:24.083 [2024-11-05 17:01:30.918855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:24.083 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.083 17:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3398054 00:37:25.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3398054) - No such process 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # 
config=() 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:25.026 { 00:37:25.026 "params": { 00:37:25.026 "name": "Nvme$subsystem", 00:37:25.026 "trtype": "$TEST_TRANSPORT", 00:37:25.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:25.026 "adrfam": "ipv4", 00:37:25.026 "trsvcid": "$NVMF_PORT", 00:37:25.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:25.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:25.026 "hdgst": ${hdgst:-false}, 00:37:25.026 "ddgst": ${ddgst:-false} 00:37:25.026 }, 00:37:25.026 "method": "bdev_nvme_attach_controller" 00:37:25.026 } 00:37:25.026 EOF 00:37:25.026 )") 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:37:25.026 17:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:25.026 "params": { 00:37:25.026 "name": "Nvme0", 00:37:25.026 "trtype": "tcp", 00:37:25.026 "traddr": "10.0.0.2", 00:37:25.026 "adrfam": "ipv4", 00:37:25.026 "trsvcid": "4420", 00:37:25.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:25.026 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:25.026 "hdgst": false, 00:37:25.026 "ddgst": false 00:37:25.026 }, 00:37:25.026 "method": "bdev_nvme_attach_controller" 00:37:25.026 }' 00:37:25.026 [2024-11-05 17:01:31.997413] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:37:25.026 [2024-11-05 17:01:31.997466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398441 ] 00:37:25.026 [2024-11-05 17:01:32.068613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.288 [2024-11-05 17:01:32.103915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.549 Running I/O for 1 seconds... 00:37:26.490 1408.00 IOPS, 88.00 MiB/s 00:37:26.490 Latency(us) 00:37:26.490 [2024-11-05T16:01:33.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.490 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:26.490 Verification LBA range: start 0x0 length 0x400 00:37:26.490 Nvme0n1 : 1.03 1426.97 89.19 0.00 0.00 44117.57 11031.89 36918.61 00:37:26.490 [2024-11-05T16:01:33.553Z] =================================================================================================================== 00:37:26.490 [2024-11-05T16:01:33.553Z] Total : 1426.97 89.19 0.00 0.00 44117.57 11031.89 36918.61 00:37:26.751 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:26.751 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:26.752 rmmod nvme_tcp 00:37:26.752 rmmod nvme_fabrics 00:37:26.752 rmmod nvme_keyring 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 3397924 ']' 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 3397924 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3397924 ']' 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3397924 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:37:26.752 17:01:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3397924 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3397924' 00:37:26.752 killing process with pid 3397924 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3397924 00:37:26.752 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3397924 00:37:27.015 [2024-11-05 17:01:33.819411] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:27.015 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:27.015 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:37:27.015 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:37:27.015 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:27.015 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:27.015 17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:27.015 
17:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:28.930 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:28.930 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:28.930 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:37:28.930 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:28.930 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:28.930 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:28.930 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 
== 3 )) 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:28.931 00:37:28.931 real 0m14.624s 00:37:28.931 user 0m19.564s 00:37:28.931 sys 0m7.407s 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:28.931 17:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.931 ************************************ 00:37:28.931 END TEST nvmf_host_management 00:37:28.931 ************************************ 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:28.931 17:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:29.193 ************************************ 00:37:29.193 START TEST nvmf_lvol 00:37:29.193 ************************************ 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:29.193 * Looking for test storage... 
00:37:29.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.193 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.194 --rc genhtml_branch_coverage=1 00:37:29.194 --rc genhtml_function_coverage=1 00:37:29.194 --rc genhtml_legend=1 00:37:29.194 --rc geninfo_all_blocks=1 00:37:29.194 --rc geninfo_unexecuted_blocks=1 00:37:29.194 00:37:29.194 ' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.194 --rc genhtml_branch_coverage=1 00:37:29.194 --rc genhtml_function_coverage=1 00:37:29.194 --rc genhtml_legend=1 00:37:29.194 --rc geninfo_all_blocks=1 00:37:29.194 --rc geninfo_unexecuted_blocks=1 00:37:29.194 00:37:29.194 ' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.194 --rc genhtml_branch_coverage=1 00:37:29.194 --rc genhtml_function_coverage=1 00:37:29.194 --rc genhtml_legend=1 00:37:29.194 --rc geninfo_all_blocks=1 00:37:29.194 --rc geninfo_unexecuted_blocks=1 00:37:29.194 00:37:29.194 ' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.194 --rc genhtml_branch_coverage=1 00:37:29.194 --rc genhtml_function_coverage=1 00:37:29.194 --rc genhtml_legend=1 00:37:29.194 --rc geninfo_all_blocks=1 00:37:29.194 --rc geninfo_unexecuted_blocks=1 00:37:29.194 00:37:29.194 ' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:29.194 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:29.195 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:37:29.195 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:37:29.195 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:29.195 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:29.195 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:29.195 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:29.195 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:37:29.195 17:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:37:37.343 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.343 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:37.343 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:37.343 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:37.343 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:37.343 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:37.343 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:37.343 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@247 -- # create_target_ns 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # 
ip netns exec nvmf_ns_spdk ip link set lo up 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:37.343 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:37.344 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:37.344 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:37.344 10.0.0.1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- 
# eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:37.344 10.0.0.2 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:37.344 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:37.344 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:37.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:37.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.574 ms 00:37:37.344 00:37:37.344 --- 10.0.0.1 ping statistics --- 00:37:37.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.344 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:37.344 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:37.344 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:37.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:37.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:37:37.345 00:37:37.345 --- 10.0.0.2 ping statistics --- 00:37:37.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.345 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:37.345 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 
00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:37:37.345 17:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:37:37.345 ' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 
-- # xtrace_disable 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=3403104 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 3403104 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3403104 ']' 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:37.345 17:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:37.345 [2024-11-05 17:01:43.718352] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:37.345 [2024-11-05 17:01:43.720297] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:37:37.345 [2024-11-05 17:01:43.720374] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.345 [2024-11-05 17:01:43.805695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:37.345 [2024-11-05 17:01:43.846938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.345 [2024-11-05 17:01:43.846975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.345 [2024-11-05 17:01:43.846986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.345 [2024-11-05 17:01:43.846993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.345 [2024-11-05 17:01:43.846999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:37.345 [2024-11-05 17:01:43.848579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:37.346 [2024-11-05 17:01:43.848715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:37.346 [2024-11-05 17:01:43.848717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.346 [2024-11-05 17:01:43.904355] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:37.346 [2024-11-05 17:01:43.904857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:37.346 [2024-11-05 17:01:43.905171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:37:37.346 [2024-11-05 17:01:43.905446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:37.606 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:37.606 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:37:37.606 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:37.606 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:37.606 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:37.607 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:37.607 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:37.867 [2024-11-05 17:01:44.709427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:37.867 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:38.127 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:38.127 17:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:38.127 17:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:38.127 17:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:38.387 17:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:38.646 17:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=709f9da0-1297-4fad-9866-a24ce6cdd560 00:37:38.646 17:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 709f9da0-1297-4fad-9866-a24ce6cdd560 lvol 20 00:37:38.646 17:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2dd9b388-0bb9-49d7-b655-a772d5f670f1 00:37:38.646 17:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:38.906 17:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2dd9b388-0bb9-49d7-b655-a772d5f670f1 00:37:39.166 17:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.166 [2024-11-05 17:01:46.161361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.166 17:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:39.426 
17:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3403642 00:37:39.426 17:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:39.426 17:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:40.367 17:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2dd9b388-0bb9-49d7-b655-a772d5f670f1 MY_SNAPSHOT 00:37:40.628 17:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fc5dbc2a-6b37-44a1-ad73-17c4712b2ede 00:37:40.628 17:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2dd9b388-0bb9-49d7-b655-a772d5f670f1 30 00:37:40.888 17:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fc5dbc2a-6b37-44a1-ad73-17c4712b2ede MY_CLONE 00:37:41.149 17:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bee28b95-e595-41a2-ba9a-9cc0784f018d 00:37:41.149 17:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bee28b95-e595-41a2-ba9a-9cc0784f018d 00:37:41.409 17:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3403642 00:37:49.547 Initializing NVMe Controllers 00:37:49.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:49.547 
Controller IO queue size 128, less than required. 00:37:49.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:49.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:49.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:49.547 Initialization complete. Launching workers. 00:37:49.547 ======================================================== 00:37:49.547 Latency(us) 00:37:49.547 Device Information : IOPS MiB/s Average min max 00:37:49.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12190.00 47.62 10503.44 3824.14 45842.56 00:37:49.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15922.40 62.20 8039.63 3843.01 69269.60 00:37:49.547 ======================================================== 00:37:49.547 Total : 28112.40 109.81 9107.98 3824.14 69269.60 00:37:49.547 00:37:49.547 17:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:49.808 17:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2dd9b388-0bb9-49d7-b655-a772d5f670f1 00:37:50.068 17:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 709f9da0-1297-4fad-9866-a24ce6cdd560 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:50.068 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:50.068 rmmod nvme_tcp 00:37:50.328 rmmod nvme_fabrics 00:37:50.328 rmmod nvme_keyring 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 3403104 ']' 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 3403104 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3403104 ']' 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3403104 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 3403104 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3403104' 00:37:50.328 killing process with pid 3403104 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3403104 00:37:50.328 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3403104 00:37:50.588 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:50.588 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:37:50.588 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:37:50.588 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:50.588 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:50.588 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:50.588 17:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:37:52.501 
17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:37:52.501 17:01:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:37:52.501 00:37:52.501 real 0m23.467s 00:37:52.501 user 0m55.143s 00:37:52.501 sys 0m10.396s 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:52.501 ************************************ 00:37:52.501 END TEST nvmf_lvol 00:37:52.501 ************************************ 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:52.501 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:52.775 ************************************ 00:37:52.775 START TEST nvmf_lvs_grow 00:37:52.775 ************************************ 00:37:52.775 17:01:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:52.775 * Looking for test storage... 00:37:52.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:52.775 17:01:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:52.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.775 --rc genhtml_branch_coverage=1 00:37:52.775 --rc genhtml_function_coverage=1 00:37:52.775 --rc genhtml_legend=1 00:37:52.775 --rc geninfo_all_blocks=1 00:37:52.775 --rc geninfo_unexecuted_blocks=1 00:37:52.775 00:37:52.775 ' 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:52.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.775 --rc genhtml_branch_coverage=1 00:37:52.775 --rc genhtml_function_coverage=1 00:37:52.775 --rc genhtml_legend=1 00:37:52.775 --rc geninfo_all_blocks=1 00:37:52.775 --rc geninfo_unexecuted_blocks=1 00:37:52.775 00:37:52.775 ' 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:52.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.775 --rc genhtml_branch_coverage=1 00:37:52.775 --rc genhtml_function_coverage=1 00:37:52.775 --rc genhtml_legend=1 00:37:52.775 --rc geninfo_all_blocks=1 00:37:52.775 --rc geninfo_unexecuted_blocks=1 00:37:52.775 00:37:52.775 ' 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:52.775 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:37:52.775 --rc genhtml_branch_coverage=1 00:37:52.775 --rc genhtml_function_coverage=1 00:37:52.775 --rc genhtml_legend=1 00:37:52.775 --rc geninfo_all_blocks=1 00:37:52.775 --rc geninfo_unexecuted_blocks=1 00:37:52.775 00:37:52.775 ' 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:52.775 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:37:52.776 17:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- 
# x722=() 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:00.916 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:00.917 
17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:00.917 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:00.917 17:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:00.917 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:00.917 17:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:00.917 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:00.917 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # create_target_ns 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:00.917 
17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:00.917 
17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:00.917 17:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:38:00.917 10.0.0.1 00:38:00.917 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:38:00.918 17:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:38:00.918 10.0.0.2 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:00.918 17:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:38:00.918 17:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # 
get_initiator_ip_address initiator0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:00.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:00.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.557 ms 00:38:00.918 00:38:00.918 --- 10.0.0.1 ping statistics --- 00:38:00.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.918 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:38:00.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:00.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:38:00.918 00:38:00.918 --- 10.0.0.2 ping statistics --- 00:38:00.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.918 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:38:00.918 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:00.919 17:02:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 
00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:38:00.919 ' 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:00.919 
17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=3409835 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 3409835 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3409835 ']' 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:00.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:00.919 17:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.919 [2024-11-05 17:02:07.246845] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:00.919 [2024-11-05 17:02:07.247981] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:38:00.919 [2024-11-05 17:02:07.248033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:00.919 [2024-11-05 17:02:07.329479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.919 [2024-11-05 17:02:07.369888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:00.919 [2024-11-05 17:02:07.369924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:00.919 [2024-11-05 17:02:07.369934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:00.919 [2024-11-05 17:02:07.369943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:00.919 [2024-11-05 17:02:07.369950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:00.919 [2024-11-05 17:02:07.370538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.919 [2024-11-05 17:02:07.426518] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:00.919 [2024-11-05 17:02:07.426781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:01.181 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:01.181 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:38:01.181 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:01.181 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:01.181 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:01.181 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:01.181 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:01.442 [2024-11-05 17:02:08.267331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:01.442 ************************************ 00:38:01.442 START TEST lvs_grow_clean 00:38:01.442 ************************************ 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:38:01.442 17:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:01.442 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:01.703 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:01.704 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:01.704 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:01.704 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:01.704 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:01.964 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:01.964 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:01.964 17:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0b0273f4-610a-4267-9215-9e18b8d97ecb lvol 150 00:38:02.226 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8396b448-90cc-4c1b-a40d-ff5973b32b8d 00:38:02.226 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:02.226 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:02.226 [2024-11-05 17:02:09.210918] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:02.226 [2024-11-05 17:02:09.211005] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:02.226 true 00:38:02.226 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:02.226 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:02.487 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:02.487 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:02.747 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8396b448-90cc-4c1b-a40d-ff5973b32b8d 00:38:02.747 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:03.007 [2024-11-05 17:02:09.899570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:03.007 17:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3410534 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3410534 /var/tmp/bdevperf.sock 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3410534 ']' 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:03.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:03.268 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:03.268 [2024-11-05 17:02:10.143432] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:38:03.268 [2024-11-05 17:02:10.143515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410534 ] 00:38:03.268 [2024-11-05 17:02:10.235166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.268 [2024-11-05 17:02:10.288334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:04.210 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:04.210 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:38:04.210 17:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:04.472 Nvme0n1 00:38:04.472 17:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:04.472 [ 00:38:04.472 { 00:38:04.472 "name": "Nvme0n1", 00:38:04.472 "aliases": [ 00:38:04.472 "8396b448-90cc-4c1b-a40d-ff5973b32b8d" 00:38:04.472 ], 00:38:04.472 "product_name": "NVMe disk", 00:38:04.472 
"block_size": 4096, 00:38:04.472 "num_blocks": 38912, 00:38:04.472 "uuid": "8396b448-90cc-4c1b-a40d-ff5973b32b8d", 00:38:04.472 "numa_id": 0, 00:38:04.472 "assigned_rate_limits": { 00:38:04.472 "rw_ios_per_sec": 0, 00:38:04.472 "rw_mbytes_per_sec": 0, 00:38:04.472 "r_mbytes_per_sec": 0, 00:38:04.472 "w_mbytes_per_sec": 0 00:38:04.472 }, 00:38:04.472 "claimed": false, 00:38:04.472 "zoned": false, 00:38:04.472 "supported_io_types": { 00:38:04.472 "read": true, 00:38:04.472 "write": true, 00:38:04.472 "unmap": true, 00:38:04.472 "flush": true, 00:38:04.472 "reset": true, 00:38:04.472 "nvme_admin": true, 00:38:04.472 "nvme_io": true, 00:38:04.472 "nvme_io_md": false, 00:38:04.472 "write_zeroes": true, 00:38:04.472 "zcopy": false, 00:38:04.472 "get_zone_info": false, 00:38:04.472 "zone_management": false, 00:38:04.472 "zone_append": false, 00:38:04.472 "compare": true, 00:38:04.472 "compare_and_write": true, 00:38:04.472 "abort": true, 00:38:04.472 "seek_hole": false, 00:38:04.472 "seek_data": false, 00:38:04.472 "copy": true, 00:38:04.472 "nvme_iov_md": false 00:38:04.472 }, 00:38:04.472 "memory_domains": [ 00:38:04.472 { 00:38:04.472 "dma_device_id": "system", 00:38:04.472 "dma_device_type": 1 00:38:04.472 } 00:38:04.472 ], 00:38:04.472 "driver_specific": { 00:38:04.472 "nvme": [ 00:38:04.472 { 00:38:04.472 "trid": { 00:38:04.472 "trtype": "TCP", 00:38:04.472 "adrfam": "IPv4", 00:38:04.472 "traddr": "10.0.0.2", 00:38:04.472 "trsvcid": "4420", 00:38:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:04.472 }, 00:38:04.472 "ctrlr_data": { 00:38:04.472 "cntlid": 1, 00:38:04.472 "vendor_id": "0x8086", 00:38:04.472 "model_number": "SPDK bdev Controller", 00:38:04.472 "serial_number": "SPDK0", 00:38:04.472 "firmware_revision": "25.01", 00:38:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:04.472 "oacs": { 00:38:04.472 "security": 0, 00:38:04.472 "format": 0, 00:38:04.472 "firmware": 0, 00:38:04.472 "ns_manage": 0 00:38:04.472 }, 00:38:04.472 "multi_ctrlr": true, 
00:38:04.472 "ana_reporting": false 00:38:04.472 }, 00:38:04.472 "vs": { 00:38:04.472 "nvme_version": "1.3" 00:38:04.472 }, 00:38:04.472 "ns_data": { 00:38:04.472 "id": 1, 00:38:04.472 "can_share": true 00:38:04.472 } 00:38:04.472 } 00:38:04.472 ], 00:38:04.472 "mp_policy": "active_passive" 00:38:04.472 } 00:38:04.472 } 00:38:04.472 ] 00:38:04.733 17:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3410775 00:38:04.733 17:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:04.733 17:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:04.733 Running I/O for 10 seconds... 00:38:05.672 Latency(us) 00:38:05.672 [2024-11-05T16:02:12.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.672 Nvme0n1 : 1.00 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:38:05.672 [2024-11-05T16:02:12.735Z] =================================================================================================================== 00:38:05.672 [2024-11-05T16:02:12.735Z] Total : 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:38:05.672 00:38:06.613 17:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:06.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.613 Nvme0n1 : 2.00 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:38:06.613 [2024-11-05T16:02:13.677Z] 
=================================================================================================================== 00:38:06.614 [2024-11-05T16:02:13.677Z] Total : 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:38:06.614 00:38:06.874 true 00:38:06.874 17:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:06.874 17:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:06.874 17:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:06.874 17:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:06.874 17:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3410775 00:38:07.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.815 Nvme0n1 : 3.00 17825.67 69.63 0.00 0.00 0.00 0.00 0.00 00:38:07.815 [2024-11-05T16:02:14.878Z] =================================================================================================================== 00:38:07.815 [2024-11-05T16:02:14.878Z] Total : 17825.67 69.63 0.00 0.00 0.00 0.00 0.00 00:38:07.815 00:38:08.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.754 Nvme0n1 : 4.00 17877.75 69.83 0.00 0.00 0.00 0.00 0.00 00:38:08.754 [2024-11-05T16:02:15.817Z] =================================================================================================================== 00:38:08.754 [2024-11-05T16:02:15.817Z] Total : 17877.75 69.83 0.00 0.00 0.00 0.00 0.00 00:38:08.754 00:38:09.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:38:09.694 Nvme0n1 : 5.00 17909.00 69.96 0.00 0.00 0.00 0.00 0.00 00:38:09.694 [2024-11-05T16:02:16.757Z] =================================================================================================================== 00:38:09.694 [2024-11-05T16:02:16.757Z] Total : 17909.00 69.96 0.00 0.00 0.00 0.00 0.00 00:38:09.694 00:38:10.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:10.636 Nvme0n1 : 6.00 17929.83 70.04 0.00 0.00 0.00 0.00 0.00 00:38:10.636 [2024-11-05T16:02:17.699Z] =================================================================================================================== 00:38:10.636 [2024-11-05T16:02:17.699Z] Total : 17929.83 70.04 0.00 0.00 0.00 0.00 0.00 00:38:10.636 00:38:12.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:12.020 Nvme0n1 : 7.00 17944.71 70.10 0.00 0.00 0.00 0.00 0.00 00:38:12.020 [2024-11-05T16:02:19.083Z] =================================================================================================================== 00:38:12.020 [2024-11-05T16:02:19.083Z] Total : 17944.71 70.10 0.00 0.00 0.00 0.00 0.00 00:38:12.020 00:38:12.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:12.591 Nvme0n1 : 8.00 17955.88 70.14 0.00 0.00 0.00 0.00 0.00 00:38:12.591 [2024-11-05T16:02:19.654Z] =================================================================================================================== 00:38:12.591 [2024-11-05T16:02:19.654Z] Total : 17955.88 70.14 0.00 0.00 0.00 0.00 0.00 00:38:12.591 00:38:13.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.975 Nvme0n1 : 9.00 17964.56 70.17 0.00 0.00 0.00 0.00 0.00 00:38:13.975 [2024-11-05T16:02:21.038Z] =================================================================================================================== 00:38:13.975 [2024-11-05T16:02:21.038Z] Total : 17964.56 70.17 0.00 0.00 0.00 0.00 0.00 00:38:13.975 
00:38:14.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.918 Nvme0n1 : 10.00 17977.90 70.23 0.00 0.00 0.00 0.00 0.00 00:38:14.918 [2024-11-05T16:02:21.981Z] =================================================================================================================== 00:38:14.918 [2024-11-05T16:02:21.981Z] Total : 17977.90 70.23 0.00 0.00 0.00 0.00 0.00 00:38:14.918 00:38:14.918 00:38:14.918 Latency(us) 00:38:14.918 [2024-11-05T16:02:21.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.918 Nvme0n1 : 10.01 17983.26 70.25 0.00 0.00 7113.68 2580.48 13707.95 00:38:14.918 [2024-11-05T16:02:21.981Z] =================================================================================================================== 00:38:14.918 [2024-11-05T16:02:21.981Z] Total : 17983.26 70.25 0.00 0.00 7113.68 2580.48 13707.95 00:38:14.918 { 00:38:14.918 "results": [ 00:38:14.918 { 00:38:14.918 "job": "Nvme0n1", 00:38:14.918 "core_mask": "0x2", 00:38:14.918 "workload": "randwrite", 00:38:14.918 "status": "finished", 00:38:14.918 "queue_depth": 128, 00:38:14.918 "io_size": 4096, 00:38:14.918 "runtime": 10.00764, 00:38:14.918 "iops": 17983.26078875739, 00:38:14.918 "mibps": 70.24711245608356, 00:38:14.918 "io_failed": 0, 00:38:14.918 "io_timeout": 0, 00:38:14.918 "avg_latency_us": 7113.679655349966, 00:38:14.918 "min_latency_us": 2580.48, 00:38:14.918 "max_latency_us": 13707.946666666667 00:38:14.918 } 00:38:14.918 ], 00:38:14.918 "core_count": 1 00:38:14.918 } 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3410534 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3410534 ']' 00:38:14.918 17:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3410534 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3410534 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3410534' 00:38:14.918 killing process with pid 3410534 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3410534 00:38:14.918 Received shutdown signal, test time was about 10.000000 seconds 00:38:14.918 00:38:14.918 Latency(us) 00:38:14.918 [2024-11-05T16:02:21.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.918 [2024-11-05T16:02:21.981Z] =================================================================================================================== 00:38:14.918 [2024-11-05T16:02:21.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3410534 00:38:14.918 17:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:15.180 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:15.180 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:15.180 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:15.441 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:15.441 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:15.441 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:15.702 [2024-11-05 17:02:22.566968] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:15.702 request: 00:38:15.702 { 00:38:15.702 "uuid": "0b0273f4-610a-4267-9215-9e18b8d97ecb", 00:38:15.702 "method": 
"bdev_lvol_get_lvstores", 00:38:15.702 "req_id": 1 00:38:15.702 } 00:38:15.702 Got JSON-RPC error response 00:38:15.702 response: 00:38:15.702 { 00:38:15.702 "code": -19, 00:38:15.702 "message": "No such device" 00:38:15.702 } 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:15.702 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:15.964 aio_bdev 00:38:15.964 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8396b448-90cc-4c1b-a40d-ff5973b32b8d 00:38:15.964 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=8396b448-90cc-4c1b-a40d-ff5973b32b8d 00:38:15.964 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:38:15.965 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:38:15.965 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:38:15.965 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:38:15.965 17:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:16.226 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8396b448-90cc-4c1b-a40d-ff5973b32b8d -t 2000 00:38:16.226 [ 00:38:16.226 { 00:38:16.226 "name": "8396b448-90cc-4c1b-a40d-ff5973b32b8d", 00:38:16.226 "aliases": [ 00:38:16.226 "lvs/lvol" 00:38:16.226 ], 00:38:16.226 "product_name": "Logical Volume", 00:38:16.226 "block_size": 4096, 00:38:16.226 "num_blocks": 38912, 00:38:16.226 "uuid": "8396b448-90cc-4c1b-a40d-ff5973b32b8d", 00:38:16.226 "assigned_rate_limits": { 00:38:16.226 "rw_ios_per_sec": 0, 00:38:16.226 "rw_mbytes_per_sec": 0, 00:38:16.226 "r_mbytes_per_sec": 0, 00:38:16.226 "w_mbytes_per_sec": 0 00:38:16.226 }, 00:38:16.226 "claimed": false, 00:38:16.226 "zoned": false, 00:38:16.226 "supported_io_types": { 00:38:16.226 "read": true, 00:38:16.226 "write": true, 00:38:16.226 "unmap": true, 00:38:16.226 "flush": false, 00:38:16.226 "reset": true, 00:38:16.226 "nvme_admin": false, 00:38:16.226 "nvme_io": false, 00:38:16.226 "nvme_io_md": false, 00:38:16.226 "write_zeroes": true, 00:38:16.226 "zcopy": false, 00:38:16.226 "get_zone_info": false, 00:38:16.226 "zone_management": false, 00:38:16.226 "zone_append": false, 00:38:16.226 "compare": false, 00:38:16.226 "compare_and_write": false, 00:38:16.226 "abort": false, 00:38:16.226 "seek_hole": true, 00:38:16.226 "seek_data": true, 00:38:16.226 "copy": false, 00:38:16.226 "nvme_iov_md": false 00:38:16.226 }, 00:38:16.226 "driver_specific": { 00:38:16.226 "lvol": { 00:38:16.226 "lvol_store_uuid": "0b0273f4-610a-4267-9215-9e18b8d97ecb", 00:38:16.226 "base_bdev": "aio_bdev", 00:38:16.226 
"thin_provision": false, 00:38:16.226 "num_allocated_clusters": 38, 00:38:16.226 "snapshot": false, 00:38:16.226 "clone": false, 00:38:16.226 "esnap_clone": false 00:38:16.226 } 00:38:16.226 } 00:38:16.226 } 00:38:16.226 ] 00:38:16.226 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:38:16.226 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:16.226 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:16.487 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:16.487 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 00:38:16.487 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:16.822 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:16.822 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8396b448-90cc-4c1b-a40d-ff5973b32b8d 00:38:16.822 17:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b0273f4-610a-4267-9215-9e18b8d97ecb 
00:38:17.118 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:17.380 00:38:17.380 real 0m15.876s 00:38:17.380 user 0m15.630s 00:38:17.380 sys 0m1.397s 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:17.380 ************************************ 00:38:17.380 END TEST lvs_grow_clean 00:38:17.380 ************************************ 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:17.380 ************************************ 00:38:17.380 START TEST lvs_grow_dirty 00:38:17.380 ************************************ 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:17.380 17:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:17.380 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:17.641 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:17.641 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:17.641 17:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ba2df191-21c0-423a-975e-aff0a028a1de 00:38:17.641 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:17.641 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:17.903 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:17.903 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:17.903 17:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ba2df191-21c0-423a-975e-aff0a028a1de lvol 150 00:38:18.165 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5b5893de-ef37-4b79-8af5-24df502800b8 00:38:18.165 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:18.165 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:18.165 [2024-11-05 17:02:25.222896] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:18.165 [2024-11-05 
17:02:25.222967] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:18.165 true 00:38:18.426 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:18.426 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:18.426 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:18.426 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:18.687 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5b5893de-ef37-4b79-8af5-24df502800b8 00:38:18.949 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:18.949 [2024-11-05 17:02:25.927242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:18.949 17:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3413616 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3413616 /var/tmp/bdevperf.sock 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3413616 ']' 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:19.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:19.210 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:19.210 [2024-11-05 17:02:26.165223] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:38:19.210 [2024-11-05 17:02:26.165300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413616 ] 00:38:19.210 [2024-11-05 17:02:26.253959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.471 [2024-11-05 17:02:26.288189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.044 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:20.044 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:38:20.044 17:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:20.306 Nvme0n1 00:38:20.306 17:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:20.568 [ 00:38:20.568 { 00:38:20.568 "name": "Nvme0n1", 00:38:20.568 "aliases": [ 00:38:20.568 "5b5893de-ef37-4b79-8af5-24df502800b8" 00:38:20.568 ], 00:38:20.568 "product_name": "NVMe disk", 00:38:20.568 "block_size": 4096, 00:38:20.568 "num_blocks": 38912, 00:38:20.568 "uuid": "5b5893de-ef37-4b79-8af5-24df502800b8", 00:38:20.568 "numa_id": 0, 00:38:20.568 "assigned_rate_limits": { 00:38:20.568 "rw_ios_per_sec": 0, 00:38:20.568 "rw_mbytes_per_sec": 0, 00:38:20.568 "r_mbytes_per_sec": 0, 00:38:20.568 "w_mbytes_per_sec": 0 00:38:20.568 }, 00:38:20.568 "claimed": false, 00:38:20.568 "zoned": false, 
00:38:20.568 "supported_io_types": { 00:38:20.568 "read": true, 00:38:20.568 "write": true, 00:38:20.568 "unmap": true, 00:38:20.568 "flush": true, 00:38:20.568 "reset": true, 00:38:20.568 "nvme_admin": true, 00:38:20.568 "nvme_io": true, 00:38:20.568 "nvme_io_md": false, 00:38:20.568 "write_zeroes": true, 00:38:20.568 "zcopy": false, 00:38:20.568 "get_zone_info": false, 00:38:20.568 "zone_management": false, 00:38:20.568 "zone_append": false, 00:38:20.568 "compare": true, 00:38:20.568 "compare_and_write": true, 00:38:20.568 "abort": true, 00:38:20.568 "seek_hole": false, 00:38:20.568 "seek_data": false, 00:38:20.568 "copy": true, 00:38:20.568 "nvme_iov_md": false 00:38:20.568 }, 00:38:20.568 "memory_domains": [ 00:38:20.568 { 00:38:20.568 "dma_device_id": "system", 00:38:20.568 "dma_device_type": 1 00:38:20.568 } 00:38:20.568 ], 00:38:20.568 "driver_specific": { 00:38:20.568 "nvme": [ 00:38:20.568 { 00:38:20.568 "trid": { 00:38:20.568 "trtype": "TCP", 00:38:20.568 "adrfam": "IPv4", 00:38:20.568 "traddr": "10.0.0.2", 00:38:20.568 "trsvcid": "4420", 00:38:20.568 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:20.568 }, 00:38:20.568 "ctrlr_data": { 00:38:20.568 "cntlid": 1, 00:38:20.568 "vendor_id": "0x8086", 00:38:20.568 "model_number": "SPDK bdev Controller", 00:38:20.568 "serial_number": "SPDK0", 00:38:20.568 "firmware_revision": "25.01", 00:38:20.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:20.568 "oacs": { 00:38:20.568 "security": 0, 00:38:20.568 "format": 0, 00:38:20.568 "firmware": 0, 00:38:20.568 "ns_manage": 0 00:38:20.568 }, 00:38:20.568 "multi_ctrlr": true, 00:38:20.568 "ana_reporting": false 00:38:20.568 }, 00:38:20.568 "vs": { 00:38:20.568 "nvme_version": "1.3" 00:38:20.568 }, 00:38:20.568 "ns_data": { 00:38:20.568 "id": 1, 00:38:20.568 "can_share": true 00:38:20.568 } 00:38:20.568 } 00:38:20.568 ], 00:38:20.568 "mp_policy": "active_passive" 00:38:20.568 } 00:38:20.568 } 00:38:20.568 ] 00:38:20.568 17:02:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3413803 00:38:20.568 17:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:20.569 17:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:20.569 Running I/O for 10 seconds... 00:38:21.513 Latency(us) 00:38:21.513 [2024-11-05T16:02:28.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.513 Nvme0n1 : 1.00 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:38:21.513 [2024-11-05T16:02:28.576Z] =================================================================================================================== 00:38:21.513 [2024-11-05T16:02:28.576Z] Total : 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:38:21.513 00:38:22.458 17:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:22.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.458 Nvme0n1 : 2.00 17848.50 69.72 0.00 0.00 0.00 0.00 0.00 00:38:22.458 [2024-11-05T16:02:29.521Z] =================================================================================================================== 00:38:22.458 [2024-11-05T16:02:29.521Z] Total : 17848.50 69.72 0.00 0.00 0.00 0.00 0.00 00:38:22.458 00:38:22.718 true 00:38:22.718 17:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:22.718 17:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:22.979 17:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:22.979 17:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:22.979 17:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3413803 00:38:23.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.552 Nvme0n1 : 3.00 17889.33 69.88 0.00 0.00 0.00 0.00 0.00 00:38:23.552 [2024-11-05T16:02:30.615Z] =================================================================================================================== 00:38:23.552 [2024-11-05T16:02:30.615Z] Total : 17889.33 69.88 0.00 0.00 0.00 0.00 0.00 00:38:23.552 00:38:24.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.496 Nvme0n1 : 4.00 17909.50 69.96 0.00 0.00 0.00 0.00 0.00 00:38:24.497 [2024-11-05T16:02:31.560Z] =================================================================================================================== 00:38:24.497 [2024-11-05T16:02:31.560Z] Total : 17909.50 69.96 0.00 0.00 0.00 0.00 0.00 00:38:24.497 00:38:25.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:25.882 Nvme0n1 : 5.00 17947.20 70.11 0.00 0.00 0.00 0.00 0.00 00:38:25.882 [2024-11-05T16:02:32.945Z] =================================================================================================================== 00:38:25.882 [2024-11-05T16:02:32.945Z] Total : 17947.20 70.11 0.00 0.00 0.00 0.00 0.00 00:38:25.882 00:38:26.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:26.823 Nvme0n1 : 6.00 17972.17 70.20 0.00 0.00 0.00 0.00 0.00 00:38:26.823 [2024-11-05T16:02:33.886Z] =================================================================================================================== 00:38:26.823 [2024-11-05T16:02:33.886Z] Total : 17972.17 70.20 0.00 0.00 0.00 0.00 0.00 00:38:26.823 00:38:27.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:27.764 Nvme0n1 : 7.00 17981.00 70.24 0.00 0.00 0.00 0.00 0.00 00:38:27.764 [2024-11-05T16:02:34.827Z] =================================================================================================================== 00:38:27.764 [2024-11-05T16:02:34.827Z] Total : 17981.00 70.24 0.00 0.00 0.00 0.00 0.00 00:38:27.764 00:38:28.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.705 Nvme0n1 : 8.00 18003.50 70.33 0.00 0.00 0.00 0.00 0.00 00:38:28.705 [2024-11-05T16:02:35.768Z] =================================================================================================================== 00:38:28.705 [2024-11-05T16:02:35.768Z] Total : 18003.50 70.33 0.00 0.00 0.00 0.00 0.00 00:38:28.705 00:38:29.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.646 Nvme0n1 : 9.00 18006.89 70.34 0.00 0.00 0.00 0.00 0.00 00:38:29.646 [2024-11-05T16:02:36.709Z] =================================================================================================================== 00:38:29.646 [2024-11-05T16:02:36.709Z] Total : 18006.89 70.34 0.00 0.00 0.00 0.00 0.00 00:38:29.646 00:38:30.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.587 Nvme0n1 : 10.00 18022.30 70.40 0.00 0.00 0.00 0.00 0.00 00:38:30.587 [2024-11-05T16:02:37.650Z] =================================================================================================================== 00:38:30.587 [2024-11-05T16:02:37.650Z] Total : 18022.30 70.40 0.00 0.00 0.00 0.00 0.00 00:38:30.587 00:38:30.587 
00:38:30.587 Latency(us) 00:38:30.587 [2024-11-05T16:02:37.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.587 Nvme0n1 : 10.01 18024.23 70.41 0.00 0.00 7098.04 1693.01 13107.20 00:38:30.587 [2024-11-05T16:02:37.650Z] =================================================================================================================== 00:38:30.587 [2024-11-05T16:02:37.650Z] Total : 18024.23 70.41 0.00 0.00 7098.04 1693.01 13107.20 00:38:30.587 { 00:38:30.587 "results": [ 00:38:30.587 { 00:38:30.587 "job": "Nvme0n1", 00:38:30.587 "core_mask": "0x2", 00:38:30.587 "workload": "randwrite", 00:38:30.587 "status": "finished", 00:38:30.587 "queue_depth": 128, 00:38:30.587 "io_size": 4096, 00:38:30.587 "runtime": 10.006031, 00:38:30.587 "iops": 18024.229587135997, 00:38:30.587 "mibps": 70.40714682474999, 00:38:30.587 "io_failed": 0, 00:38:30.587 "io_timeout": 0, 00:38:30.587 "avg_latency_us": 7098.04079496833, 00:38:30.587 "min_latency_us": 1693.0133333333333, 00:38:30.587 "max_latency_us": 13107.2 00:38:30.587 } 00:38:30.587 ], 00:38:30.587 "core_count": 1 00:38:30.587 } 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3413616 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3413616 ']' 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3413616 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:30.587 17:02:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3413616 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3413616' 00:38:30.587 killing process with pid 3413616 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3413616 00:38:30.587 Received shutdown signal, test time was about 10.000000 seconds 00:38:30.587 00:38:30.587 Latency(us) 00:38:30.587 [2024-11-05T16:02:37.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.587 [2024-11-05T16:02:37.650Z] =================================================================================================================== 00:38:30.587 [2024-11-05T16:02:37.650Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:30.587 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3413616 00:38:30.848 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:30.848 17:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:31.108 17:02:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:31.108 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3409835 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3409835 00:38:31.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3409835 Killed "${NVMF_APP[@]}" "$@" 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=3415940 00:38:31.369 17:02:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 3415940 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3415940 ']' 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:31.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:31.369 17:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:31.369 [2024-11-05 17:02:38.418793] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:31.369 [2024-11-05 17:02:38.420102] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:38:31.369 [2024-11-05 17:02:38.420166] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:31.629 [2024-11-05 17:02:38.497582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.629 [2024-11-05 17:02:38.532036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:31.629 [2024-11-05 17:02:38.532071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:31.629 [2024-11-05 17:02:38.532079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:31.629 [2024-11-05 17:02:38.532086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:31.629 [2024-11-05 17:02:38.532091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:31.629 [2024-11-05 17:02:38.532642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.629 [2024-11-05 17:02:38.586671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:31.629 [2024-11-05 17:02:38.586931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:32.198 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:32.198 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:38:32.198 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:32.198 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:32.198 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:32.198 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:32.198 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:32.459 [2024-11-05 17:02:39.391867] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:32.459 [2024-11-05 17:02:39.391983] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:32.459 [2024-11-05 17:02:39.392016] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:32.459 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:32.459 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5b5893de-ef37-4b79-8af5-24df502800b8 00:38:32.459 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=5b5893de-ef37-4b79-8af5-24df502800b8 00:38:32.459 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:38:32.459 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:38:32.459 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:38:32.459 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:38:32.459 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:32.719 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5b5893de-ef37-4b79-8af5-24df502800b8 -t 2000 00:38:32.719 [ 00:38:32.719 { 00:38:32.719 "name": "5b5893de-ef37-4b79-8af5-24df502800b8", 00:38:32.719 "aliases": [ 00:38:32.719 "lvs/lvol" 00:38:32.719 ], 00:38:32.719 "product_name": "Logical Volume", 00:38:32.719 "block_size": 4096, 00:38:32.719 "num_blocks": 38912, 00:38:32.719 "uuid": "5b5893de-ef37-4b79-8af5-24df502800b8", 00:38:32.719 "assigned_rate_limits": { 00:38:32.719 "rw_ios_per_sec": 0, 00:38:32.719 "rw_mbytes_per_sec": 0, 00:38:32.719 "r_mbytes_per_sec": 0, 00:38:32.719 "w_mbytes_per_sec": 0 00:38:32.719 }, 00:38:32.719 "claimed": false, 00:38:32.719 "zoned": false, 00:38:32.719 "supported_io_types": { 00:38:32.719 "read": true, 00:38:32.719 "write": true, 00:38:32.719 "unmap": true, 00:38:32.719 "flush": false, 00:38:32.719 "reset": true, 00:38:32.719 "nvme_admin": false, 00:38:32.719 "nvme_io": false, 00:38:32.719 "nvme_io_md": false, 00:38:32.719 "write_zeroes": true, 
00:38:32.719 "zcopy": false, 00:38:32.719 "get_zone_info": false, 00:38:32.719 "zone_management": false, 00:38:32.719 "zone_append": false, 00:38:32.719 "compare": false, 00:38:32.719 "compare_and_write": false, 00:38:32.719 "abort": false, 00:38:32.719 "seek_hole": true, 00:38:32.719 "seek_data": true, 00:38:32.719 "copy": false, 00:38:32.719 "nvme_iov_md": false 00:38:32.719 }, 00:38:32.719 "driver_specific": { 00:38:32.719 "lvol": { 00:38:32.719 "lvol_store_uuid": "ba2df191-21c0-423a-975e-aff0a028a1de", 00:38:32.719 "base_bdev": "aio_bdev", 00:38:32.719 "thin_provision": false, 00:38:32.719 "num_allocated_clusters": 38, 00:38:32.719 "snapshot": false, 00:38:32.719 "clone": false, 00:38:32.719 "esnap_clone": false 00:38:32.719 } 00:38:32.719 } 00:38:32.719 } 00:38:32.719 ] 00:38:32.719 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:38:32.719 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:32.719 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:32.980 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:32.980 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:32.980 17:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:33.241 [2024-11-05 17:02:40.245194] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:33.241 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:33.502 request: 00:38:33.502 { 00:38:33.502 "uuid": "ba2df191-21c0-423a-975e-aff0a028a1de", 00:38:33.502 "method": "bdev_lvol_get_lvstores", 00:38:33.502 "req_id": 1 00:38:33.502 } 00:38:33.502 Got JSON-RPC error response 00:38:33.502 response: 00:38:33.502 { 00:38:33.502 "code": -19, 00:38:33.502 "message": "No such device" 00:38:33.502 } 00:38:33.502 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:38:33.502 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:33.502 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:33.502 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:33.502 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:33.763 aio_bdev 00:38:33.763 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5b5893de-ef37-4b79-8af5-24df502800b8 00:38:33.763 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=5b5893de-ef37-4b79-8af5-24df502800b8 00:38:33.763 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:38:33.763 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:38:33.763 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:38:33.763 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:38:33.763 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:33.763 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5b5893de-ef37-4b79-8af5-24df502800b8 -t 2000 00:38:34.023 [ 00:38:34.023 { 00:38:34.023 "name": "5b5893de-ef37-4b79-8af5-24df502800b8", 00:38:34.023 "aliases": [ 00:38:34.023 "lvs/lvol" 00:38:34.023 ], 00:38:34.023 "product_name": "Logical Volume", 00:38:34.023 "block_size": 4096, 00:38:34.023 "num_blocks": 38912, 00:38:34.023 "uuid": "5b5893de-ef37-4b79-8af5-24df502800b8", 00:38:34.023 "assigned_rate_limits": { 00:38:34.024 "rw_ios_per_sec": 0, 00:38:34.024 "rw_mbytes_per_sec": 0, 00:38:34.024 
"r_mbytes_per_sec": 0, 00:38:34.024 "w_mbytes_per_sec": 0 00:38:34.024 }, 00:38:34.024 "claimed": false, 00:38:34.024 "zoned": false, 00:38:34.024 "supported_io_types": { 00:38:34.024 "read": true, 00:38:34.024 "write": true, 00:38:34.024 "unmap": true, 00:38:34.024 "flush": false, 00:38:34.024 "reset": true, 00:38:34.024 "nvme_admin": false, 00:38:34.024 "nvme_io": false, 00:38:34.024 "nvme_io_md": false, 00:38:34.024 "write_zeroes": true, 00:38:34.024 "zcopy": false, 00:38:34.024 "get_zone_info": false, 00:38:34.024 "zone_management": false, 00:38:34.024 "zone_append": false, 00:38:34.024 "compare": false, 00:38:34.024 "compare_and_write": false, 00:38:34.024 "abort": false, 00:38:34.024 "seek_hole": true, 00:38:34.024 "seek_data": true, 00:38:34.024 "copy": false, 00:38:34.024 "nvme_iov_md": false 00:38:34.024 }, 00:38:34.024 "driver_specific": { 00:38:34.024 "lvol": { 00:38:34.024 "lvol_store_uuid": "ba2df191-21c0-423a-975e-aff0a028a1de", 00:38:34.024 "base_bdev": "aio_bdev", 00:38:34.024 "thin_provision": false, 00:38:34.024 "num_allocated_clusters": 38, 00:38:34.024 "snapshot": false, 00:38:34.024 "clone": false, 00:38:34.024 "esnap_clone": false 00:38:34.024 } 00:38:34.024 } 00:38:34.024 } 00:38:34.024 ] 00:38:34.024 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:38:34.024 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:34.024 17:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:34.284 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:34.284 17:02:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:34.284 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:34.284 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:34.284 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5b5893de-ef37-4b79-8af5-24df502800b8 00:38:34.545 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba2df191-21c0-423a-975e-aff0a028a1de 00:38:34.805 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:35.066 00:38:35.066 real 0m17.635s 00:38:35.066 user 0m35.599s 00:38:35.066 sys 0m2.893s 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:35.066 ************************************ 00:38:35.066 END TEST lvs_grow_dirty 00:38:35.066 ************************************ 
00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:38:35.066 17:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:35.066 nvmf_trace.0 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:35.066 17:02:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:35.066 rmmod nvme_tcp 00:38:35.066 rmmod nvme_fabrics 00:38:35.066 rmmod nvme_keyring 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:38:35.066 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:38:35.067 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 3415940 ']' 00:38:35.067 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 3415940 00:38:35.067 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3415940 ']' 00:38:35.067 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3415940 00:38:35.067 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:38:35.067 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:35.067 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3415940 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:35.327 
17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3415940' 00:38:35.327 killing process with pid 3415940 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3415940 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3415940 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:35.327 17:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 
00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 
00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:38:37.871 00:38:37.871 real 0m44.807s 00:38:37.871 user 0m54.191s 00:38:37.871 sys 0m10.338s 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:37.871 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:37.872 ************************************ 00:38:37.872 END TEST nvmf_lvs_grow 00:38:37.872 ************************************ 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:37.872 ************************************ 00:38:37.872 START TEST nvmf_bdev_io_wait 00:38:37.872 ************************************ 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh 
--transport=tcp --interrupt-mode 00:38:37.872 * Looking for test storage... 00:38:37.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:37.872 17:02:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.872 --rc genhtml_branch_coverage=1 00:38:37.872 --rc genhtml_function_coverage=1 00:38:37.872 --rc genhtml_legend=1 00:38:37.872 --rc geninfo_all_blocks=1 00:38:37.872 --rc geninfo_unexecuted_blocks=1 00:38:37.872 00:38:37.872 ' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.872 --rc genhtml_branch_coverage=1 00:38:37.872 --rc genhtml_function_coverage=1 00:38:37.872 --rc genhtml_legend=1 00:38:37.872 --rc geninfo_all_blocks=1 00:38:37.872 --rc geninfo_unexecuted_blocks=1 00:38:37.872 00:38:37.872 ' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.872 --rc genhtml_branch_coverage=1 00:38:37.872 --rc genhtml_function_coverage=1 00:38:37.872 --rc genhtml_legend=1 00:38:37.872 --rc geninfo_all_blocks=1 00:38:37.872 --rc geninfo_unexecuted_blocks=1 00:38:37.872 00:38:37.872 ' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.872 
--rc genhtml_branch_coverage=1 00:38:37.872 --rc genhtml_function_coverage=1 00:38:37.872 --rc genhtml_legend=1 00:38:37.872 --rc geninfo_all_blocks=1 00:38:37.872 --rc geninfo_unexecuted_blocks=1 00:38:37.872 00:38:37.872 ' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.872 17:02:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.872 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.873 17:02:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:37.873 17:02:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:38:37.873 17:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.014 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.014 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:38:46.014 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:38:46.014 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:38:46.014 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 
-- # local -ga e810 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:46.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.015 
17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:46.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:46.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:46.015 Found net devices under 0000:4b:00.1: cvl_0_1 
00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # create_target_ns 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:38:46.015 17:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:46.015 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:38:46.016 17:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:38:46.016 10.0.0.1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:38:46.016 10.0.0.2 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:38:46.016 
17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:38:46.016 17:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # 
dev=cvl_0_0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:46.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:46.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.652 ms 00:38:46.016 00:38:46.016 --- 10.0.0.1 ping statistics --- 00:38:46.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.016 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:46.016 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:46.017 17:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:38:46.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:46.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:38:46.017 00:38:46.017 --- 10.0.0.2 ping statistics --- 00:38:46.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.017 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:46.017 17:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:46.017 17:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:46.017 17:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:38:46.017 ' 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:46.017 17:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=3420734 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 3420734 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3420734 ']' 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:46.017 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:46.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.018 [2024-11-05 17:02:52.177055] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:46.018 [2024-11-05 17:02:52.178076] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:38:46.018 [2024-11-05 17:02:52.178121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:46.018 [2024-11-05 17:02:52.261232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:46.018 [2024-11-05 17:02:52.302861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:46.018 [2024-11-05 17:02:52.302900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:46.018 [2024-11-05 17:02:52.302908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:46.018 [2024-11-05 17:02:52.302915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:46.018 [2024-11-05 17:02:52.302921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:46.018 [2024-11-05 17:02:52.304468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:46.018 [2024-11-05 17:02:52.304582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:46.018 [2024-11-05 17:02:52.304738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:46.018 [2024-11-05 17:02:52.304738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.018 [2024-11-05 17:02:52.305230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.018 17:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.018 17:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.018 [2024-11-05 17:02:53.038782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:46.018 [2024-11-05 17:02:53.039360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:46.018 [2024-11-05 17:02:53.039861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:46.018 [2024-11-05 17:02:53.040107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:46.018 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.018 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:46.018 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.018 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.018 [2024-11-05 17:02:53.049438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:46.018 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.018 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:46.018 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.018 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.280 Malloc0 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.280 17:02:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.280 [2024-11-05 17:02:53.113585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3421078 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3421080 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:46.280 17:02:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:46.280 { 00:38:46.280 "params": { 00:38:46.280 "name": "Nvme$subsystem", 00:38:46.280 "trtype": "$TEST_TRANSPORT", 00:38:46.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.280 "adrfam": "ipv4", 00:38:46.280 "trsvcid": "$NVMF_PORT", 00:38:46.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.280 "hdgst": ${hdgst:-false}, 00:38:46.280 "ddgst": ${ddgst:-false} 00:38:46.280 }, 00:38:46.280 "method": "bdev_nvme_attach_controller" 00:38:46.280 } 00:38:46.280 EOF 00:38:46.280 )") 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3421082 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:38:46.280 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:46.280 17:02:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:46.280 { 00:38:46.280 "params": { 00:38:46.280 "name": "Nvme$subsystem", 00:38:46.280 "trtype": "$TEST_TRANSPORT", 00:38:46.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.280 "adrfam": "ipv4", 00:38:46.280 "trsvcid": "$NVMF_PORT", 00:38:46.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.280 "hdgst": ${hdgst:-false}, 00:38:46.280 "ddgst": ${ddgst:-false} 00:38:46.280 }, 00:38:46.280 "method": "bdev_nvme_attach_controller" 00:38:46.280 } 00:38:46.281 EOF 00:38:46.281 )") 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3421085 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:46.281 { 00:38:46.281 "params": { 00:38:46.281 "name": 
"Nvme$subsystem", 00:38:46.281 "trtype": "$TEST_TRANSPORT", 00:38:46.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.281 "adrfam": "ipv4", 00:38:46.281 "trsvcid": "$NVMF_PORT", 00:38:46.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.281 "hdgst": ${hdgst:-false}, 00:38:46.281 "ddgst": ${ddgst:-false} 00:38:46.281 }, 00:38:46.281 "method": "bdev_nvme_attach_controller" 00:38:46.281 } 00:38:46.281 EOF 00:38:46.281 )") 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:46.281 { 00:38:46.281 "params": { 00:38:46.281 "name": "Nvme$subsystem", 00:38:46.281 "trtype": "$TEST_TRANSPORT", 00:38:46.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.281 "adrfam": "ipv4", 00:38:46.281 "trsvcid": "$NVMF_PORT", 00:38:46.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.281 "hdgst": ${hdgst:-false}, 00:38:46.281 "ddgst": ${ddgst:-false} 00:38:46.281 }, 00:38:46.281 "method": 
"bdev_nvme_attach_controller" 00:38:46.281 } 00:38:46.281 EOF 00:38:46.281 )") 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3421078 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:46.281 "params": { 00:38:46.281 "name": "Nvme1", 00:38:46.281 "trtype": "tcp", 00:38:46.281 "traddr": "10.0.0.2", 00:38:46.281 "adrfam": "ipv4", 00:38:46.281 "trsvcid": "4420", 00:38:46.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.281 "hdgst": false, 00:38:46.281 "ddgst": false 00:38:46.281 }, 00:38:46.281 "method": "bdev_nvme_attach_controller" 00:38:46.281 }' 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:46.281 "params": { 00:38:46.281 "name": "Nvme1", 00:38:46.281 "trtype": "tcp", 00:38:46.281 "traddr": "10.0.0.2", 00:38:46.281 "adrfam": "ipv4", 00:38:46.281 "trsvcid": "4420", 00:38:46.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.281 "hdgst": false, 00:38:46.281 "ddgst": false 00:38:46.281 }, 00:38:46.281 "method": "bdev_nvme_attach_controller" 00:38:46.281 }' 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:46.281 "params": { 00:38:46.281 "name": "Nvme1", 00:38:46.281 "trtype": "tcp", 00:38:46.281 "traddr": "10.0.0.2", 00:38:46.281 "adrfam": "ipv4", 00:38:46.281 "trsvcid": "4420", 00:38:46.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.281 "hdgst": false, 00:38:46.281 "ddgst": false 00:38:46.281 }, 00:38:46.281 "method": "bdev_nvme_attach_controller" 00:38:46.281 }' 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:38:46.281 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:46.281 "params": { 00:38:46.281 "name": "Nvme1", 00:38:46.281 "trtype": "tcp", 00:38:46.281 "traddr": "10.0.0.2", 00:38:46.281 "adrfam": "ipv4", 00:38:46.281 "trsvcid": "4420", 00:38:46.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.281 "hdgst": false, 00:38:46.281 "ddgst": false 00:38:46.281 }, 00:38:46.281 "method": "bdev_nvme_attach_controller" 
00:38:46.281 }' 00:38:46.281 [2024-11-05 17:02:53.169622] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:38:46.281 [2024-11-05 17:02:53.169678] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:46.281 [2024-11-05 17:02:53.170967] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:38:46.281 [2024-11-05 17:02:53.171016] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:46.281 [2024-11-05 17:02:53.172421] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:38:46.281 [2024-11-05 17:02:53.172470] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:46.281 [2024-11-05 17:02:53.172804] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:38:46.281 [2024-11-05 17:02:53.172850] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:46.281 [2024-11-05 17:02:53.325906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.542 [2024-11-05 17:02:53.355181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:46.542 [2024-11-05 17:02:53.385953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.542 [2024-11-05 17:02:53.415319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:46.542 [2024-11-05 17:02:53.430451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.542 [2024-11-05 17:02:53.459452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:46.542 [2024-11-05 17:02:53.479665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.542 [2024-11-05 17:02:53.507548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:46.542 Running I/O for 1 seconds... 00:38:46.542 Running I/O for 1 seconds... 00:38:46.802 Running I/O for 1 seconds... 00:38:46.802 Running I/O for 1 seconds... 
00:38:47.744 14325.00 IOPS, 55.96 MiB/s 00:38:47.744 Latency(us) 00:38:47.744 [2024-11-05T16:02:54.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.744 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:47.744 Nvme1n1 : 1.01 14366.53 56.12 0.00 0.00 8882.46 4669.44 11851.09 00:38:47.744 [2024-11-05T16:02:54.807Z] =================================================================================================================== 00:38:47.744 [2024-11-05T16:02:54.807Z] Total : 14366.53 56.12 0.00 0.00 8882.46 4669.44 11851.09 00:38:47.744 8189.00 IOPS, 31.99 MiB/s 00:38:47.744 Latency(us) 00:38:47.744 [2024-11-05T16:02:54.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.744 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:47.744 Nvme1n1 : 1.02 8187.57 31.98 0.00 0.00 15485.69 2088.96 26432.85 00:38:47.744 [2024-11-05T16:02:54.807Z] =================================================================================================================== 00:38:47.744 [2024-11-05T16:02:54.807Z] Total : 8187.57 31.98 0.00 0.00 15485.69 2088.96 26432.85 00:38:47.744 188856.00 IOPS, 737.72 MiB/s 00:38:47.744 Latency(us) 00:38:47.744 [2024-11-05T16:02:54.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.744 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:47.744 Nvme1n1 : 1.00 188480.44 736.25 0.00 0.00 675.39 302.08 1966.08 00:38:47.744 [2024-11-05T16:02:54.807Z] =================================================================================================================== 00:38:47.744 [2024-11-05T16:02:54.807Z] Total : 188480.44 736.25 0.00 0.00 675.39 302.08 1966.08 00:38:47.744 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3421080 00:38:47.744 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 3421082 00:38:47.744 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3421085 00:38:47.744 8804.00 IOPS, 34.39 MiB/s 00:38:47.744 Latency(us) 00:38:47.744 [2024-11-05T16:02:54.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.744 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:47.744 Nvme1n1 : 1.01 8924.32 34.86 0.00 0.00 14305.64 3686.40 32768.00 00:38:47.744 [2024-11-05T16:02:54.807Z] =================================================================================================================== 00:38:47.744 [2024-11-05T16:02:54.807Z] Total : 8924.32 34.86 0.00 0.00 14305.64 3686.40 32768.00 00:38:48.004 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:48.004 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.004 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.004 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:48.005 17:02:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:48.005 rmmod nvme_tcp 00:38:48.005 rmmod nvme_fabrics 00:38:48.005 rmmod nvme_keyring 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 3420734 ']' 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 3420734 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3420734 ']' 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3420734 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3420734 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3420734' 00:38:48.005 killing process with pid 3420734 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3420734 00:38:48.005 17:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3420734 00:38:48.265 17:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:48.265 17:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:38:48.265 17:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:38:48.265 17:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:48.265 17:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:48.265 17:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:48.265 17:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:50.177 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:50.177 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:50.177 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:38:50.177 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:50.177 17:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:38:50.177 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:38:50.178 17:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:38:50.178 00:38:50.178 real 0m12.742s 00:38:50.178 user 0m14.959s 00:38:50.178 sys 0m7.171s 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:50.178 ************************************ 00:38:50.178 END TEST nvmf_bdev_io_wait 00:38:50.178 ************************************ 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:50.178 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:50.439 ************************************ 00:38:50.439 START TEST nvmf_queue_depth 
00:38:50.439 ************************************ 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:50.439 * Looking for test storage... 00:38:50.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.439 17:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.439 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:50.440 17:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:50.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.440 --rc genhtml_branch_coverage=1 00:38:50.440 --rc genhtml_function_coverage=1 00:38:50.440 --rc genhtml_legend=1 00:38:50.440 --rc geninfo_all_blocks=1 00:38:50.440 --rc geninfo_unexecuted_blocks=1 00:38:50.440 00:38:50.440 ' 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:50.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.440 --rc genhtml_branch_coverage=1 00:38:50.440 --rc genhtml_function_coverage=1 00:38:50.440 --rc genhtml_legend=1 00:38:50.440 --rc geninfo_all_blocks=1 00:38:50.440 --rc geninfo_unexecuted_blocks=1 00:38:50.440 00:38:50.440 ' 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:50.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.440 --rc genhtml_branch_coverage=1 00:38:50.440 --rc genhtml_function_coverage=1 00:38:50.440 --rc genhtml_legend=1 00:38:50.440 --rc geninfo_all_blocks=1 00:38:50.440 --rc geninfo_unexecuted_blocks=1 00:38:50.440 
00:38:50.440 ' 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:50.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.440 --rc genhtml_branch_coverage=1 00:38:50.440 --rc genhtml_function_coverage=1 00:38:50.440 --rc genhtml_legend=1 00:38:50.440 --rc geninfo_all_blocks=1 00:38:50.440 --rc geninfo_unexecuted_blocks=1 00:38:50.440 00:38:50.440 ' 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:50.440 17:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:50.440 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.701 17:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:38:50.701 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:38:50.702 17:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:38:50.702 17:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:38:58.843 17:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:58.843 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:58.844 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:58.844 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:58.844 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:58.844 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@247 -- # create_target_ns 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:58.844 17:03:04 
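The trace above maps each PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory prefix with the `##*/` parameter expansion. A minimal sketch of that prefix strip (the path literal below is illustrative, so no `/sys` access is needed):

```shell
# Illustrative path; the real script globs /sys/bus/pci/devices/$pci/net/*
pci_net_devs=("/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0")
# "${arr[@]##*/}" deletes the longest '*/' prefix from every element,
# leaving just the interface name (here: cvl_0_0)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"
```

This is how `Found net devices under 0000:4b:00.0: cvl_0_0` above ends up printing a bare interface name.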
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:38:58.844 17:03:04 
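`set_up` in the trace takes an optional array *name* and binds it with a bash nameref (`local -n`), so one helper can run a command either directly or prefixed with `ip netns exec`. A hedged sketch of that dispatch pattern (the function name is illustrative, and it echoes the command instead of `eval`ing it, so it needs no root):

```shell
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

# Sketch of the nameref dispatch seen in set_up: if an array name is
# given, bind it and prefix the command; otherwise run it as-is.
run_maybe_in_ns() {
    local in_ns=${1:-}; shift
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns      # bind to the named array, not its text
        echo "${ns[*]} $*"      # the real helper does: eval "${ns[*]} $*"
    else
        echo "$*"
    fi
}

cmd=$(run_maybe_in_ns NVMF_TARGET_NS_CMD ip link set lo up)
echo "$cmd"   # ip netns exec nvmf_ns_spdk ip link set lo up
```

The same pattern explains the paired `eval 'ip netns exec nvmf_ns_spdk ...'` / bare `eval ' ip ...'` lines throughout the trace.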
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:38:58.844 10.0.0.1 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:38:58.844 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:38:58.845 10.0.0.2 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 
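`val_to_ip` above turns the pool value 167772161 (0x0A000001) into dotted-quad form before `ip addr add`. A sketch of that conversion using shell arithmetic (the octet extraction is shown explicitly here; the upstream helper may differ in detail):

```shell
# Split a 32-bit integer into four octets and print an IPv4 address.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1, matching the trace above
val_to_ip 167772162   # -> 10.0.0.2
```

Keeping the pool as a plain integer is what lets the setup loop hand out consecutive initiator/target pairs with `(( ip_pool += 2 ))`.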
00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:38:58.845 17:03:04 
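The `ipts` call above opens TCP/4420 on the initiator-facing device and tags the rule with an `SPDK_NVMF:` comment so teardown can locate and delete exactly the rules this run inserted. A sketch of that tagging wrapper (it prints the iptables invocation rather than executing it, since inserting rules needs root):

```shell
# Sketch: tag every inserted rule with an 'SPDK_NVMF:' comment carrying
# the original arguments. Prints instead of running for illustration.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Embedding the argument list in the comment means cleanup can list rules, grep for the tag, and replay each match with `-D` instead of flushing the whole chain.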
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:58.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:58.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.685 ms 00:38:58.845 00:38:58.845 --- 10.0.0.1 ping statistics --- 00:38:58.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.845 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- 
# ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:38:58.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
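Both address lookups above recover an interface's IP by reading back the `ifalias` file that `set_ip` wrote earlier, rather than parsing `ip addr` output. A sketch of that store/read-back round trip; `SYSFS_NET` is an assumption standing in for `/sys/class/net` so the sketch runs without real devices:

```shell
# SYSFS_NET stands in for /sys/class/net (assumption for testability).
SYSFS_NET=$(mktemp -d)
mkdir -p "$SYSFS_NET/cvl_0_0"

set_ip_alias() { echo "$2" > "$SYSFS_NET/$1/ifalias"; }   # set_ip's tee step
get_ip_address() { cat "$SYSFS_NET/$1/ifalias"; }         # the cat above

set_ip_alias cvl_0_0 10.0.0.1
get_ip_address cvl_0_0
```

Stashing the address in `ifalias` gives later helpers (and the namespaced target side, via `ip netns exec ... cat`) a single trivially parseable source of truth.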
00:38:58.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:38:58.845 00:38:58.845 --- 10.0.0.2 ping statistics --- 00:38:58.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.845 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:58.845 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:58.846 17:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 
00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:38:58.846 ' 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:58.846 17:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:58.846 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=3425639 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 3425639 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3425639 ']' 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:58.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
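`waitforlisten` above polls (with `max_retries=100`) until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock`. A hedged sketch of that bounded wait for a UNIX socket to appear (the real helper also probes the RPC endpoint, which is omitted here):

```shell
# Poll for a UNIX socket with a bounded retry count; returns 1 on timeout.
waitforsocket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}

waitforsocket /var/tmp/does-not-exist.sock 2 || echo "timed out"
```

Bounding the retries is what turns a hung target startup into a visible test failure instead of a stalled CI job.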
00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:58.846 [2024-11-05 17:03:05.082429] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:58.846 [2024-11-05 17:03:05.083523] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:38:58.846 [2024-11-05 17:03:05.083579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:58.846 [2024-11-05 17:03:05.187669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.846 [2024-11-05 17:03:05.238720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:58.846 [2024-11-05 17:03:05.238786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:58.846 [2024-11-05 17:03:05.238795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:58.846 [2024-11-05 17:03:05.238802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:58.846 [2024-11-05 17:03:05.238808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:58.846 [2024-11-05 17:03:05.239580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:58.846 [2024-11-05 17:03:05.316395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:58.846 [2024-11-05 17:03:05.316689] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:58.846 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:58.847 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:38:58.847 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:58.847 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:58.847 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.108 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.108 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:59.108 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.108 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.108 [2024-11-05 17:03:05.948430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.108 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.108 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:59.108 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.108 17:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.108 Malloc0 00:38:59.108 17:03:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.108 [2024-11-05 17:03:06.036500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.108 
17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3425929 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3425929 /var/tmp/bdevperf.sock 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3425929 ']' 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:59.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:59.108 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.108 [2024-11-05 17:03:06.094191] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:38:59.108 [2024-11-05 17:03:06.094257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425929 ] 00:38:59.108 [2024-11-05 17:03:06.169975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.369 [2024-11-05 17:03:06.211522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.940 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:59.940 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:38:59.940 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:59.940 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.940 17:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:00.201 NVMe0n1 00:39:00.201 17:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.201 17:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:00.201 Running I/O for 10 seconds... 
00:39:02.523 8234.00 IOPS, 32.16 MiB/s [2024-11-05T16:03:10.526Z] 8711.00 IOPS, 34.03 MiB/s [2024-11-05T16:03:11.468Z] 8874.67 IOPS, 34.67 MiB/s [2024-11-05T16:03:12.410Z] 9700.00 IOPS, 37.89 MiB/s [2024-11-05T16:03:13.353Z] 10126.80 IOPS, 39.56 MiB/s [2024-11-05T16:03:14.294Z] 10428.83 IOPS, 40.74 MiB/s [2024-11-05T16:03:15.678Z] 10692.57 IOPS, 41.77 MiB/s [2024-11-05T16:03:16.619Z] 10891.75 IOPS, 42.55 MiB/s [2024-11-05T16:03:17.639Z] 11043.11 IOPS, 43.14 MiB/s [2024-11-05T16:03:17.639Z] 11165.10 IOPS, 43.61 MiB/s 00:39:10.576 Latency(us) 00:39:10.576 [2024-11-05T16:03:17.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.576 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:10.576 Verification LBA range: start 0x0 length 0x4000 00:39:10.576 NVMe0n1 : 10.11 11144.77 43.53 0.00 0.00 91189.97 24685.23 75584.85 00:39:10.576 [2024-11-05T16:03:17.639Z] =================================================================================================================== 00:39:10.576 [2024-11-05T16:03:17.639Z] Total : 11144.77 43.53 0.00 0.00 91189.97 24685.23 75584.85 00:39:10.576 { 00:39:10.576 "results": [ 00:39:10.576 { 00:39:10.576 "job": "NVMe0n1", 00:39:10.576 "core_mask": "0x1", 00:39:10.576 "workload": "verify", 00:39:10.576 "status": "finished", 00:39:10.576 "verify_range": { 00:39:10.576 "start": 0, 00:39:10.576 "length": 16384 00:39:10.576 }, 00:39:10.576 "queue_depth": 1024, 00:39:10.576 "io_size": 4096, 00:39:10.576 "runtime": 10.110031, 00:39:10.576 "iops": 11144.772948767417, 00:39:10.576 "mibps": 43.53426933112272, 00:39:10.576 "io_failed": 0, 00:39:10.576 "io_timeout": 0, 00:39:10.576 "avg_latency_us": 91189.97231470142, 00:39:10.576 "min_latency_us": 24685.226666666666, 00:39:10.576 "max_latency_us": 75584.85333333333 00:39:10.576 } 00:39:10.576 ], 00:39:10.576 "core_count": 1 00:39:10.576 } 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3425929 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3425929 ']' 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3425929 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3425929 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3425929' 00:39:10.576 killing process with pid 3425929 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3425929 00:39:10.576 Received shutdown signal, test time was about 10.000000 seconds 00:39:10.576 00:39:10.576 Latency(us) 00:39:10.576 [2024-11-05T16:03:17.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.576 [2024-11-05T16:03:17.639Z] =================================================================================================================== 00:39:10.576 [2024-11-05T16:03:17.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3425929 00:39:10.576 17:03:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:10.576 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:39:10.577 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:10.577 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:39:10.577 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:10.577 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:10.577 rmmod nvme_tcp 00:39:10.577 rmmod nvme_fabrics 00:39:10.936 rmmod nvme_keyring 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 3425639 ']' 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 3425639 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3425639 ']' 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3425639 00:39:10.936 17:03:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3425639 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3425639' 00:39:10.936 killing process with pid 3425639 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3425639 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3425639 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:10.936 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:10.936 17:03:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:12.850 17:03:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:12.850 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:39:13.111 00:39:13.111 real 0m22.640s 00:39:13.111 user 0m24.973s 00:39:13.111 sys 0m7.458s 00:39:13.111 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:13.111 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:13.111 ************************************ 00:39:13.111 END TEST nvmf_queue_depth 00:39:13.111 ************************************ 00:39:13.111 17:03:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:13.111 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:13.111 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:13.111 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:13.111 ************************************ 00:39:13.111 START TEST nvmf_target_multipath 00:39:13.111 ************************************ 00:39:13.111 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:13.111 * Looking for test storage... 00:39:13.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:13.111 17:03:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:13.111 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:13.372 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:13.372 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:13.372 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:13.372 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:13.372 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:13.372 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:13.372 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:13.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.373 --rc genhtml_branch_coverage=1 00:39:13.373 --rc genhtml_function_coverage=1 00:39:13.373 --rc genhtml_legend=1 00:39:13.373 --rc geninfo_all_blocks=1 00:39:13.373 --rc geninfo_unexecuted_blocks=1 00:39:13.373 00:39:13.373 ' 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:13.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.373 --rc genhtml_branch_coverage=1 00:39:13.373 --rc genhtml_function_coverage=1 00:39:13.373 --rc genhtml_legend=1 00:39:13.373 --rc geninfo_all_blocks=1 00:39:13.373 --rc geninfo_unexecuted_blocks=1 00:39:13.373 00:39:13.373 ' 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:13.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.373 --rc genhtml_branch_coverage=1 00:39:13.373 --rc genhtml_function_coverage=1 00:39:13.373 --rc genhtml_legend=1 00:39:13.373 --rc geninfo_all_blocks=1 00:39:13.373 --rc geninfo_unexecuted_blocks=1 00:39:13.373 00:39:13.373 ' 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:13.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.373 --rc genhtml_branch_coverage=1 00:39:13.373 --rc genhtml_function_coverage=1 00:39:13.373 --rc genhtml_legend=1 00:39:13.373 --rc geninfo_all_blocks=1 00:39:13.373 --rc geninfo_unexecuted_blocks=1 00:39:13.373 00:39:13.373 ' 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']'
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode)
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:39:13.373 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:39:13.374 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns
00:39:13.374 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:39:13.374 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:39:13.374 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # xtrace_disable
00:39:13.374 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@131 -- # pci_devs=()
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@131 -- # local -a pci_devs
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@132 -- # pci_net_devs=()
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@133 -- # pci_drivers=()
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@133 -- # local -A pci_drivers
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@135 -- # net_devs=()
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@135 -- # local -ga net_devs
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@136 -- # e810=()
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@136 -- # local -ga e810
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@137 -- # x722=()
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@137 -- # local -ga x722
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@138 -- # mlx=()
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@138 -- # local -ga mlx
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:39:21.512 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:39:21.512 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:39:21.512 Found net devices under 0000:4b:00.0: cvl_0_0
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]]
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:39:21.512 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:39:21.513 Found net devices under 0000:4b:00.1: cvl_0_1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # is_hw=yes
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@247 -- # create_target_ns
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=()
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ phy == phy ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@55 -- # initiator=cvl_0_0
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@55 -- # target=cvl_0_1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ phy == veth ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ phy == veth ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias
00:39:21.513 10.0.0.1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:39:21.513 10.0.0.2
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up cvl_0_0
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns=
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up'
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ phy == veth ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ phy == veth ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 1
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=1 pair
00:39:21.513 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:39:21.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:39:21.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.599 ms
00:39:21.514
00:39:21.514 --- 10.0.0.1 ping statistics ---
00:39:21.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:21.514 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:39:21.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:21.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:39:21.514 00:39:21.514 --- 10.0.0.2 ping statistics --- 00:39:21.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.514 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@270 -- # return 0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:21.514 17:03:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 
00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:21.514 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:21.515 17:03:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:39:21.515 ' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 
00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:21.515 only one NIC for nvmf test 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:21.515 rmmod nvme_tcp 00:39:21.515 rmmod nvme_fabrics 00:39:21.515 rmmod nvme_keyring 00:39:21.515 17:03:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:21.515 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:22.900 17:03:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev 
cvl_0_1 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # 
modprobe -v -r nvme-fabrics 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 
00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:39:22.900 00:39:22.900 real 0m9.798s 00:39:22.900 user 0m2.236s 00:39:22.900 sys 0m5.504s 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:22.900 ************************************ 00:39:22.900 END TEST nvmf_target_multipath 00:39:22.900 ************************************ 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:22.900 ************************************ 00:39:22.900 START TEST nvmf_zcopy 00:39:22.900 ************************************ 00:39:22.900 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:22.900 * Looking for test storage... 00:39:23.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:23.162 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:23.162 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:39:23.162 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:23.162 17:03:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:23.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.162 --rc genhtml_branch_coverage=1 00:39:23.162 --rc genhtml_function_coverage=1 00:39:23.162 --rc genhtml_legend=1 00:39:23.162 --rc geninfo_all_blocks=1 00:39:23.162 --rc geninfo_unexecuted_blocks=1 00:39:23.162 00:39:23.162 ' 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:23.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.162 --rc genhtml_branch_coverage=1 00:39:23.162 --rc genhtml_function_coverage=1 00:39:23.162 --rc genhtml_legend=1 00:39:23.162 --rc geninfo_all_blocks=1 00:39:23.162 --rc geninfo_unexecuted_blocks=1 00:39:23.162 00:39:23.162 ' 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:23.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.162 --rc genhtml_branch_coverage=1 00:39:23.162 --rc genhtml_function_coverage=1 00:39:23.162 --rc genhtml_legend=1 00:39:23.162 --rc geninfo_all_blocks=1 00:39:23.162 --rc geninfo_unexecuted_blocks=1 00:39:23.162 00:39:23.162 ' 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:23.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.162 --rc genhtml_branch_coverage=1 00:39:23.162 --rc genhtml_function_coverage=1 00:39:23.162 --rc genhtml_legend=1 00:39:23.162 --rc geninfo_all_blocks=1 00:39:23.162 
--rc geninfo_unexecuted_blocks=1 00:39:23.162 00:39:23.162 ' 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:23.162 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:23.163 17:03:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
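The `paths/export.sh` hook traced above prepends the golangci/protoc/go directories each time it is sourced, which is why the logged `PATH` repeats the same entries many times over. A minimal sketch of collapsing such a `PATH` while preserving first-seen order (the `dedup_path` helper is hypothetical, not part of SPDK):

```shell
#!/bin/sh
# Hypothetical helper: split PATH on ':', keep only the first occurrence
# of each entry (awk's seen[] trick), and rejoin with ':'.
dedup_path() {
    printf '%s\n' "$1" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -
}

# Example with a repeated prefix like the one in the log:
dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
# -> /opt/go/1.21.1/bin:/usr/bin:/bin
```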
00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:23.163 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:39:23.163 
17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 
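The `common.sh` lines traced above bucket PCI vendor:device IDs into the `e810`, `x722`, and `mlx` arrays (Intel is `0x8086`, Mellanox `0x15b3`) before walking the bus; the two ports later reported as `0x8086 - 0x159b` fall into the E810 bucket. A sketch of the same classification as a standalone function (`classify_nic` is hypothetical; the ID-to-bucket mapping is taken from the script lines above):

```shell
#!/bin/sh
# Hypothetical classifier mirroring common.sh's device-ID buckets:
# E810 (0x1592/0x159b) and X722 (0x37d2) are Intel; all listed 0x15b3
# devices are Mellanox ConnectX variants.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # -> e810 (the ports found in this run)
classify_nic 0x15b3 0x1017   # -> mlx
```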
00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:31.303 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:31.303 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:31.303 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:31.303 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 
00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:31.304 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:39:31.304 17:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@247 -- # create_target_ns 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:39:31.304 17:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:31.304 17:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:39:31.304 10.0.0.1 00:39:31.304 17:03:37 
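`setup.sh` carries addresses around as 32-bit integers drawn from the `0x0a000001` pool and converts them to dotted-quad form with `printf` (the trace shows `val_to_ip 167772161` feeding `printf '%u.%u.%u.%u\n' 10 0 0 1`). A sketch of that conversion; the shift-and-mask body below is a reconstruction (setup.sh's exact arithmetic is not shown in the trace), but it reproduces the logged results:

```shell
#!/bin/sh
# Reconstruction (assumption): split a 32-bit integer into four octets,
# most significant first, matching the 167772161 -> 10.0.0.1 seen in the log.
val_to_ip() {
    val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1 (initiator side, cvl_0_0)
val_to_ip 167772162   # -> 10.0.0.2 (target side, cvl_0_1)
```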
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:39:31.304 10.0.0.2 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:39:31.304 17:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:31.304 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ 
-n cvl_0_0 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:31.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:31.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.655 ms 00:39:31.305 00:39:31.305 --- 10.0.0.1 ping statistics --- 00:39:31.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.305 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:31.305 
17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:31.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:31.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:39:31.305 00:39:31.305 --- 10.0.0.2 ping statistics --- 00:39:31.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.305 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 
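The setup and ping phase traced above builds one initiator/target pair: the second physical ice port (`cvl_0_1`) is moved into the `nvmf_ns_spdk` namespace, both sides get `10.0.0.x/24` addresses, an iptables rule admits NVMe/TCP traffic on port 4420, and each side is pinged once. A condensed sketch of the same topology, substituting a veth pair for the physical ports so it can run on any Linux box (requires root; device and namespace names are kept from the log, but using veth instead of `phy` interfaces is an assumption):

```shell
#!/bin/sh
# Sketch: reproduce the log's initiator/target layout with a veth pair.
set -e

ip netns add nvmf_ns_spdk                         # create_target_ns
ip link add cvl_0_0 type veth peer name cvl_0_1   # stand-in for the two ice ports
ip link set cvl_0_1 netns nvmf_ns_spdk            # add_to_ns cvl_0_1

ip addr add 10.0.0.1/24 dev cvl_0_0               # set_ip (initiator)
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1  # set_ip (target)

ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set lo up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up

# Admit NVMe/TCP connections to the default port, as the ipts helper does:
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT

# Verify both directions, matching the log's ping_ips step:
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
ping -c 1 10.0.0.2
```

Cleanup is `ip netns del nvmf_ns_spdk`, which also removes the veth peer inside it.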
00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:31.305 17:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:31.305 17:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:31.305 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:39:31.306 ' 00:39:31.306 17:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=3436804 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 3436804 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3436804 ']' 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:31.306 17:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:31.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:31.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.306 [2024-11-05 17:03:37.659729] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:31.306 [2024-11-05 17:03:37.660895] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:39:31.306 [2024-11-05 17:03:37.660955] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:31.306 [2024-11-05 17:03:37.762980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:31.306 [2024-11-05 17:03:37.815268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:31.306 [2024-11-05 17:03:37.815323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:31.306 [2024-11-05 17:03:37.815332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:31.306 [2024-11-05 17:03:37.815339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:31.306 [2024-11-05 17:03:37.815345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:31.306 [2024-11-05 17:03:37.816100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:31.306 [2024-11-05 17:03:37.892245] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:31.306 [2024-11-05 17:03:37.892541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.567 [2024-11-05 17:03:38.516958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.567 
17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.567 [2024-11-05 17:03:38.545248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:31.567 17:03:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.567 malloc0 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:31.567 { 00:39:31.567 "params": { 00:39:31.567 "name": "Nvme$subsystem", 00:39:31.567 "trtype": "$TEST_TRANSPORT", 00:39:31.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:31.567 "adrfam": "ipv4", 00:39:31.567 "trsvcid": "$NVMF_PORT", 00:39:31.567 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:31.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:31.567 "hdgst": ${hdgst:-false}, 00:39:31.567 "ddgst": ${ddgst:-false} 00:39:31.567 }, 00:39:31.567 "method": "bdev_nvme_attach_controller" 00:39:31.567 } 00:39:31.567 EOF 00:39:31.567 )") 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:39:31.567 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:31.567 "params": { 00:39:31.567 "name": "Nvme1", 00:39:31.567 "trtype": "tcp", 00:39:31.567 "traddr": "10.0.0.2", 00:39:31.567 "adrfam": "ipv4", 00:39:31.567 "trsvcid": "4420", 00:39:31.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:31.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:31.567 "hdgst": false, 00:39:31.567 "ddgst": false 00:39:31.567 }, 00:39:31.567 "method": "bdev_nvme_attach_controller" 00:39:31.567 }' 00:39:31.827 [2024-11-05 17:03:38.647941] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:39:31.827 [2024-11-05 17:03:38.648020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437106 ] 00:39:31.827 [2024-11-05 17:03:38.725283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:31.828 [2024-11-05 17:03:38.766828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.087 Running I/O for 10 seconds... 
00:39:34.411 6609.00 IOPS, 51.63 MiB/s [2024-11-05T16:03:42.044Z] 6669.00 IOPS, 52.10 MiB/s [2024-11-05T16:03:43.427Z] 6681.67 IOPS, 52.20 MiB/s [2024-11-05T16:03:44.366Z] 6687.25 IOPS, 52.24 MiB/s [2024-11-05T16:03:45.308Z] 6739.20 IOPS, 52.65 MiB/s [2024-11-05T16:03:46.247Z] 7234.50 IOPS, 56.52 MiB/s [2024-11-05T16:03:47.188Z] 7587.57 IOPS, 59.28 MiB/s [2024-11-05T16:03:48.129Z] 7849.12 IOPS, 61.32 MiB/s [2024-11-05T16:03:49.069Z] 8056.67 IOPS, 62.94 MiB/s [2024-11-05T16:03:49.069Z] 8221.60 IOPS, 64.23 MiB/s 00:39:42.006 Latency(us) 00:39:42.006 [2024-11-05T16:03:49.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:42.006 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:42.006 Verification LBA range: start 0x0 length 0x1000 00:39:42.006 Nvme1n1 : 10.01 8223.52 64.25 0.00 0.00 15512.06 1617.92 26323.63 00:39:42.006 [2024-11-05T16:03:49.069Z] =================================================================================================================== 00:39:42.006 [2024-11-05T16:03:49.069Z] Total : 8223.52 64.25 0.00 0.00 15512.06 1617.92 26323.63 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3439107 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:39:42.267 17:03:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:42.267 { 00:39:42.267 "params": { 00:39:42.267 "name": "Nvme$subsystem", 00:39:42.267 "trtype": "$TEST_TRANSPORT", 00:39:42.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:42.267 "adrfam": "ipv4", 00:39:42.267 "trsvcid": "$NVMF_PORT", 00:39:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:42.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:42.267 "hdgst": ${hdgst:-false}, 00:39:42.267 "ddgst": ${ddgst:-false} 00:39:42.267 }, 00:39:42.267 "method": "bdev_nvme_attach_controller" 00:39:42.267 } 00:39:42.267 EOF 00:39:42.267 )") 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:39:42.267 [2024-11-05 17:03:49.188499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.188528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:39:42.267 17:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:42.267 "params": { 00:39:42.267 "name": "Nvme1", 00:39:42.267 "trtype": "tcp", 00:39:42.267 "traddr": "10.0.0.2", 00:39:42.267 "adrfam": "ipv4", 00:39:42.267 "trsvcid": "4420", 00:39:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:42.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:42.267 "hdgst": false, 00:39:42.267 "ddgst": false 00:39:42.267 }, 00:39:42.267 "method": "bdev_nvme_attach_controller" 00:39:42.267 }' 00:39:42.267 [2024-11-05 17:03:49.200468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.200476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.212467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.212474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.224466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.224474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.228377] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:39:42.267 [2024-11-05 17:03:49.228424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439107 ] 00:39:42.267 [2024-11-05 17:03:49.236466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.236474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.248467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.248474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.260467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.260473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.272466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.272473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.284466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.284474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.296466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.296474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.297946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:42.267 [2024-11-05 17:03:49.308467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:42.267 [2024-11-05 17:03:49.308476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.267 [2024-11-05 17:03:49.320467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.267 [2024-11-05 17:03:49.320475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.527 [2024-11-05 17:03:49.332467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.527 [2024-11-05 17:03:49.332477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.527 [2024-11-05 17:03:49.332952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.527 [2024-11-05 17:03:49.344471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.527 [2024-11-05 17:03:49.344481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.527 [2024-11-05 17:03:49.356473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.527 [2024-11-05 17:03:49.356485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.527 [2024-11-05 17:03:49.368472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.527 [2024-11-05 17:03:49.368484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.527 [2024-11-05 17:03:49.380468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.527 [2024-11-05 17:03:49.380477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.527 [2024-11-05 17:03:49.392467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.527 [2024-11-05 17:03:49.392475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.527 [2024-11-05 17:03:49.404475] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.527 [2024-11-05 17:03:49.404487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the two error messages above repeated with successive timestamps through 2024-11-05 17:03:49.584]
00:39:42.788 Running I/O for 5 seconds...
[error pair repeated with successive timestamps from 2024-11-05 17:03:49.601 through 17:03:50.585]
00:39:43.570 18990.00 IOPS, 148.36 MiB/s [2024-11-05T16:03:50.633Z]
[error pair repeated with successive timestamps from 2024-11-05 17:03:50.599 through 17:03:51.599]
00:39:44.612 19085.00 IOPS, 149.10 MiB/s [2024-11-05T16:03:51.675Z]
[error pair repeated with successive timestamps from 2024-11-05 17:03:51.612 through 17:03:51.692]
00:39:44.873 [2024-11-05 17:03:51.707448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.707463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.720486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.720501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.733179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.733198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.747331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.747346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.760283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.760298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.773246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.773262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.787456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.787472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.800395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.800410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.812902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 
[2024-11-05 17:03:51.812916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.827366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.827382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.840448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.840463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.852568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.852583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.865506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.865521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.879806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.879822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.892611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.892625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.907476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.907491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.920838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.920853] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.873 [2024-11-05 17:03:51.935639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.873 [2024-11-05 17:03:51.935653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:51.948759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:51.948773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:51.963285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:51.963300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:51.976182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:51.976196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:51.988791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:51.988810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.003244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.003258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.016155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.016170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.028773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.028790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:45.134 [2024-11-05 17:03:52.043630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.043646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.056933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.056948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.071240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.071255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.084234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.084249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.096583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.096598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.109137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.109151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.123475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.123490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.136463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.136478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.149216] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.149230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.163514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.163530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.176662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.176676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.134 [2024-11-05 17:03:52.191300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.134 [2024-11-05 17:03:52.191316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.204172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.204187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.216945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.216959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.231150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.231165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.244474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.244493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.257756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.257771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.271950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.271965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.284628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.284643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.299629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.299643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.312634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.312648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.327367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.327381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.340270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.340285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.353488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.353503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.366938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 
[2024-11-05 17:03:52.366954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.379547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.379562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.392460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.392475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.405163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.405177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.419077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.419091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.431818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.431833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.444328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.444343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.396 [2024-11-05 17:03:52.457161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.396 [2024-11-05 17:03:52.457175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.471411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.471426] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.484337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.484351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.497277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.497291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.511525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.511539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.524566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.524581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.536649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.536664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.551334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.551350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.564250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.564265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.576706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.576720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:45.656 [2024-11-05 17:03:52.591293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.591307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 19087.33 IOPS, 149.12 MiB/s [2024-11-05T16:03:52.719Z] [2024-11-05 17:03:52.604750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.656 [2024-11-05 17:03:52.604764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.656 [2024-11-05 17:03:52.619397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.657 [2024-11-05 17:03:52.619412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.657 [2024-11-05 17:03:52.632482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.657 [2024-11-05 17:03:52.632496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.657 [2024-11-05 17:03:52.644901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.657 [2024-11-05 17:03:52.644915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.657 [2024-11-05 17:03:52.659891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.657 [2024-11-05 17:03:52.659906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.657 [2024-11-05 17:03:52.672849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.657 [2024-11-05 17:03:52.672863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.657 [2024-11-05 17:03:52.687307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.657 [2024-11-05 17:03:52.687322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:45.657 [2024-11-05 17:03:52.700573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.657 [2024-11-05 17:03:52.700588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.657 [2024-11-05 17:03:52.713532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.657 [2024-11-05 17:03:52.713547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.727870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.727884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.740907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.740921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.755704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.755718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.768829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.768843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.783559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.783573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.796458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.796473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.809239] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.809253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.823508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.823523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.836461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.836476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.849228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.849242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.863897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.863911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.877110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.877125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.892010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.892025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.904913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.904927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.920153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.920168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.933562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.933577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.947919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.947934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.960992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.961006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.917 [2024-11-05 17:03:52.975646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.917 [2024-11-05 17:03:52.975660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:52.988736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:52.988753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.003640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.003654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.016516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.016530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.030057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 
[2024-11-05 17:03:53.030071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.043839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.043853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.056691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.056706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.069408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.069422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.083423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.083437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.096530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.096545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.109246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.109259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.123954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.123969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.136694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.136708] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.149515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.149529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.163727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.163742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.176490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.176504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.189859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.189873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.203463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.203478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.216341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.216356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.178 [2024-11-05 17:03:53.229653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.178 [2024-11-05 17:03:53.229669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.243725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.243740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:46.439 [2024-11-05 17:03:53.256655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.256674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.269323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.269338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.283738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.283756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.296931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.296945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.311507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.311522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.323985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.324000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.337550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.337565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.351775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.351790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.364803] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.364817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.379660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.379675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.392473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.392488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.404850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.404865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.419608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.419623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.432648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.432663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.445906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.445920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.459973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.459987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.473235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.473249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.487333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.487347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.439 [2024-11-05 17:03:53.500156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.439 [2024-11-05 17:03:53.500171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.512925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.512943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.528053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.528068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.541160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.541175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.556079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.556094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.569013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.569029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.583824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 
[2024-11-05 17:03:53.583839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.596914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.596929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 19092.50 IOPS, 149.16 MiB/s [2024-11-05T16:03:53.763Z] [2024-11-05 17:03:53.611372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.611387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.624209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.624224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.637573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.637587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.652023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.652038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.664931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.664946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.679793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.679808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.692645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 
[2024-11-05 17:03:53.692660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.705666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.705681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.719528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.719543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.732637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.732651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.745629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.745644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.700 [2024-11-05 17:03:53.759351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.700 [2024-11-05 17:03:53.759365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.772777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.772795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.788184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.788199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.801229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.801243] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.816248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.816263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.829470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.829485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.843625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.843640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.856335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.856349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.869143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.869157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.883636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.883650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.896438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.896453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.909410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.909424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:46.961 [2024-11-05 17:03:53.923509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.923523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.936525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.936540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.949215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.949229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.963447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.963461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.976514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.976529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:53.989840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:53.989854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:54.003607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:54.003622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:46.961 [2024-11-05 17:03:54.016691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:46.961 [2024-11-05 17:03:54.016706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.029566] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.029581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.043563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.043578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.056212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.056227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.068947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.068962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.083430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.083445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.096397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.096412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.109394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.109409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.123716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.123730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.137022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.137036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.151410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.151425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.164353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.164368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.177047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.177061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.191376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.191390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.204497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.204512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.217657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.217672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.232017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.232032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.245035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 
[2024-11-05 17:03:54.245048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.259554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.259569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.223 [2024-11-05 17:03:54.272780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.223 [2024-11-05 17:03:54.272793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.287766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.287780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.300439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.300453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.313359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.313373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.327672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.327686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.340943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.340957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.355581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.355596] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.368755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.368769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.383286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.383300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.396405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.396421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.409307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.409322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.423800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.423814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.436678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.436693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.449671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.449688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.464132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.464147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:47.484 [2024-11-05 17:03:54.477514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.477528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.491618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.491632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.504494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.504509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.516934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.516948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.531862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.531876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.484 [2024-11-05 17:03:54.544670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.484 [2024-11-05 17:03:54.544684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.557395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.557410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.571460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.571474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.583929] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.583943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.597052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.597067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 19090.40 IOPS, 149.14 MiB/s [2024-11-05T16:03:54.809Z] [2024-11-05 17:03:54.610758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.610773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 00:39:47.746 Latency(us) 00:39:47.746 [2024-11-05T16:03:54.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:47.746 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:47.746 Nvme1n1 : 5.01 19092.50 149.16 0.00 0.00 6697.32 2717.01 12397.23 00:39:47.746 [2024-11-05T16:03:54.809Z] =================================================================================================================== 00:39:47.746 [2024-11-05T16:03:54.809Z] Total : 19092.50 149.16 0.00 0.00 6697.32 2717.01 12397.23 00:39:47.746 [2024-11-05 17:03:54.620470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.620482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.632475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.632487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.644474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.644486] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.656475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.656488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.668472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.668482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.680467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.680476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.692467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.692474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.704469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.704479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.716468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.716476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 [2024-11-05 17:03:54.728467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:47.746 [2024-11-05 17:03:54.728478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:47.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3439107) - No such process 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 
-- # wait 3439107 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:47.746 delay0 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.746 17:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:48.007 [2024-11-05 17:03:54.877147] 
nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:54.586 Initializing NVMe Controllers 00:39:54.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:54.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:54.586 Initialization complete. Launching workers. 00:39:54.586 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 294, failed: 9541 00:39:54.586 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9771, failed to submit 64 00:39:54.586 success 9659, unsuccessful 112, failed 0 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:54.586 rmmod nvme_tcp 00:39:54.586 rmmod nvme_fabrics 00:39:54.586 rmmod nvme_keyring 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:39:54.586 17:04:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 3436804 ']' 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 3436804 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3436804 ']' 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3436804 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3436804 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3436804' 00:39:54.586 killing process with pid 3436804 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3436804 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3436804 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:39:54.586 17:04:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:54.586 17:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:57.130 
17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:39:57.130 00:39:57.130 real 0m33.861s 00:39:57.130 user 0m43.520s 00:39:57.130 sys 0m11.850s 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:57.130 ************************************ 00:39:57.130 END TEST nvmf_zcopy 00:39:57.130 ************************************ 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:57.130 ************************************ 00:39:57.130 START TEST nvmf_nmic 00:39:57.130 ************************************ 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:57.130 * Looking for test storage... 
00:39:57.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:57.130 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:57.131 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:57.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.131 --rc genhtml_branch_coverage=1 00:39:57.131 --rc genhtml_function_coverage=1 00:39:57.131 --rc genhtml_legend=1 00:39:57.131 --rc geninfo_all_blocks=1 00:39:57.131 --rc geninfo_unexecuted_blocks=1 00:39:57.131 00:39:57.131 ' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:57.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.131 --rc genhtml_branch_coverage=1 00:39:57.131 --rc genhtml_function_coverage=1 00:39:57.131 --rc genhtml_legend=1 00:39:57.131 --rc geninfo_all_blocks=1 00:39:57.131 --rc geninfo_unexecuted_blocks=1 00:39:57.131 00:39:57.131 ' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:57.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.131 --rc genhtml_branch_coverage=1 00:39:57.131 --rc genhtml_function_coverage=1 00:39:57.131 --rc genhtml_legend=1 00:39:57.131 --rc geninfo_all_blocks=1 00:39:57.131 --rc geninfo_unexecuted_blocks=1 00:39:57.131 00:39:57.131 ' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:57.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.131 --rc genhtml_branch_coverage=1 00:39:57.131 --rc genhtml_function_coverage=1 00:39:57.131 --rc genhtml_legend=1 00:39:57.131 --rc geninfo_all_blocks=1 00:39:57.131 --rc geninfo_unexecuted_blocks=1 00:39:57.131 00:39:57.131 ' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:57.131 17:04:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:39:57.131 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:40:05.271 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:05.271 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:05.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:05.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:05.271 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:05.271 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:05.271 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:05.271 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:05.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:05.272 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@247 -- # create_target_ns 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:40:05.272 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:05.272 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 
00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:40:05.272 10.0.0.1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:05.272 10.0.0.2 
00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:05.272 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:05.272 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:05.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:05.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.540 ms 00:40:05.273 00:40:05.273 --- 10.0.0.1 ping statistics --- 00:40:05.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.273 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:05.273 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:05.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:05.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:40:05.273 00:40:05.273 --- 10.0.0.2 ping statistics --- 00:40:05.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.273 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:05.273 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 
00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.273 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:40:05.274 ' 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=3445558 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 3445558 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3445558 ']' 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:05.274 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.274 [2024-11-05 17:04:11.658204] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:05.274 [2024-11-05 17:04:11.659179] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:40:05.274 [2024-11-05 17:04:11.659218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:05.274 [2024-11-05 17:04:11.734977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:05.274 [2024-11-05 17:04:11.772444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:05.274 [2024-11-05 17:04:11.772474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:05.274 [2024-11-05 17:04:11.772482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:05.274 [2024-11-05 17:04:11.772489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:05.274 [2024-11-05 17:04:11.772495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:05.274 [2024-11-05 17:04:11.774013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.274 [2024-11-05 17:04:11.774134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:05.274 [2024-11-05 17:04:11.774295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.274 [2024-11-05 17:04:11.774296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:05.274 [2024-11-05 17:04:11.828813] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:05.274 [2024-11-05 17:04:11.828978] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:05.274 [2024-11-05 17:04:11.830081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:05.274 [2024-11-05 17:04:11.830997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:05.274 [2024-11-05 17:04:11.831075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.535 [2024-11-05 17:04:12.490782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.535 Malloc0 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.535 17:04:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.535 [2024-11-05 17:04:12.566900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:05.535 test case1: single bdev can't be used in multiple subsystems 
00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.535 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.795 [2024-11-05 17:04:12.602651] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:05.796 [2024-11-05 17:04:12.602672] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:05.796 [2024-11-05 17:04:12.602680] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:05.796 request: 00:40:05.796 { 00:40:05.796 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:05.796 "namespace": { 00:40:05.796 "bdev_name": "Malloc0", 00:40:05.796 "no_auto_visible": false 00:40:05.796 }, 00:40:05.796 "method": "nvmf_subsystem_add_ns", 00:40:05.796 "req_id": 1 00:40:05.796 } 00:40:05.796 Got JSON-RPC error response 00:40:05.796 response: 00:40:05.796 { 00:40:05.796 "code": -32602, 00:40:05.796 "message": "Invalid parameters" 00:40:05.796 } 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:05.796 Adding namespace failed - expected result. 
00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:05.796 test case2: host connect to nvmf target in multiple paths 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.796 [2024-11-05 17:04:12.614763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.796 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:06.056 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:06.627 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:06.628 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:40:06.628 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:40:06.628 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:40:06.628 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:40:08.543 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:40:08.543 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:40:08.543 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:40:08.543 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:40:08.543 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:40:08.543 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:40:08.543 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:08.543 [global] 00:40:08.543 thread=1 00:40:08.543 invalidate=1 00:40:08.543 rw=write 00:40:08.543 time_based=1 00:40:08.543 runtime=1 00:40:08.543 ioengine=libaio 00:40:08.543 direct=1 00:40:08.543 bs=4096 00:40:08.543 iodepth=1 00:40:08.543 norandommap=0 00:40:08.543 numjobs=1 00:40:08.543 00:40:08.543 verify_dump=1 00:40:08.543 verify_backlog=512 00:40:08.543 verify_state_save=0 00:40:08.543 do_verify=1 00:40:08.543 verify=crc32c-intel 00:40:08.543 [job0] 00:40:08.543 filename=/dev/nvme0n1 00:40:08.543 Could not set queue depth (nvme0n1) 00:40:08.803 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:08.803 fio-3.35 00:40:08.803 Starting 1 thread 00:40:10.190 00:40:10.190 job0: (groupid=0, jobs=1): err= 0: pid=3446664: Tue Nov 5 
17:04:16 2024 00:40:10.190 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:40:10.190 slat (nsec): min=24734, max=55011, avg=25789.02, stdev=2991.05 00:40:10.190 clat (usec): min=753, max=1269, avg=1065.90, stdev=101.62 00:40:10.190 lat (usec): min=779, max=1294, avg=1091.69, stdev=101.57 00:40:10.190 clat percentiles (usec): 00:40:10.190 | 1.00th=[ 791], 5.00th=[ 881], 10.00th=[ 938], 20.00th=[ 971], 00:40:10.190 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1090], 60.00th=[ 1123], 00:40:10.190 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1188], 00:40:10.190 | 99.00th=[ 1237], 99.50th=[ 1237], 99.90th=[ 1270], 99.95th=[ 1270], 00:40:10.190 | 99.99th=[ 1270] 00:40:10.190 write: IOPS=677, BW=2709KiB/s (2774kB/s)(2712KiB/1001msec); 0 zone resets 00:40:10.190 slat (nsec): min=9486, max=67686, avg=27978.67, stdev=9894.00 00:40:10.190 clat (usec): min=234, max=853, avg=609.61, stdev=105.22 00:40:10.190 lat (usec): min=244, max=886, avg=637.59, stdev=109.81 00:40:10.190 clat percentiles (usec): 00:40:10.190 | 1.00th=[ 343], 5.00th=[ 400], 10.00th=[ 461], 20.00th=[ 519], 00:40:10.190 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 660], 00:40:10.190 | 70.00th=[ 685], 80.00th=[ 701], 90.00th=[ 725], 95.00th=[ 750], 00:40:10.190 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 857], 99.95th=[ 857], 00:40:10.190 | 99.99th=[ 857] 00:40:10.190 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:40:10.190 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:10.190 lat (usec) : 250=0.08%, 500=9.16%, 750=45.04%, 1000=14.96% 00:40:10.190 lat (msec) : 2=30.76% 00:40:10.190 cpu : usr=1.40%, sys=3.60%, ctx=1190, majf=0, minf=1 00:40:10.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.190 
issued rwts: total=512,678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.190 00:40:10.190 Run status group 0 (all jobs): 00:40:10.190 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:40:10.190 WRITE: bw=2709KiB/s (2774kB/s), 2709KiB/s-2709KiB/s (2774kB/s-2774kB/s), io=2712KiB (2777kB), run=1001-1001msec 00:40:10.190 00:40:10.190 Disk stats (read/write): 00:40:10.190 nvme0n1: ios=562/523, merge=0/0, ticks=914/310, in_queue=1224, util=97.70% 00:40:10.190 17:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:10.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:10.190 17:04:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:10.190 rmmod nvme_tcp 00:40:10.190 rmmod nvme_fabrics 00:40:10.190 rmmod nvme_keyring 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 3445558 ']' 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 3445558 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3445558 ']' 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3445558 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:10.190 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3445558 00:40:10.190 
17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:10.191 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:10.191 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3445558' 00:40:10.191 killing process with pid 3445558 00:40:10.191 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3445558 00:40:10.191 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3445558 00:40:10.452 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:10.452 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:40:10.452 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:40:10.452 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:10.452 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:10.452 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:10.452 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:40:12.458 17:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:40:12.458 17:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:40:12.458 00:40:12.458 real 0m15.638s 00:40:12.458 user 0m35.657s 00:40:12.458 sys 0m7.377s 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.458 ************************************ 00:40:12.458 END TEST nvmf_nmic 00:40:12.458 ************************************ 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:12.458 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:12.720 ************************************ 00:40:12.720 START TEST nvmf_fio_target 00:40:12.720 ************************************ 00:40:12.720 17:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:12.720 * Looking for test storage... 00:40:12.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:12.720 
17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:12.720 17:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:12.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.720 --rc genhtml_branch_coverage=1 00:40:12.720 --rc genhtml_function_coverage=1 00:40:12.720 --rc genhtml_legend=1 00:40:12.720 --rc geninfo_all_blocks=1 00:40:12.720 --rc geninfo_unexecuted_blocks=1 00:40:12.720 00:40:12.720 ' 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:12.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.720 --rc genhtml_branch_coverage=1 00:40:12.720 --rc genhtml_function_coverage=1 00:40:12.720 --rc genhtml_legend=1 00:40:12.720 --rc geninfo_all_blocks=1 00:40:12.720 --rc geninfo_unexecuted_blocks=1 00:40:12.720 00:40:12.720 ' 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:12.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.720 --rc genhtml_branch_coverage=1 00:40:12.720 --rc genhtml_function_coverage=1 00:40:12.720 --rc genhtml_legend=1 00:40:12.720 --rc geninfo_all_blocks=1 00:40:12.720 --rc geninfo_unexecuted_blocks=1 00:40:12.720 00:40:12.720 ' 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # 
LCOV='lcov 00:40:12.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.720 --rc genhtml_branch_coverage=1 00:40:12.720 --rc genhtml_function_coverage=1 00:40:12.720 --rc genhtml_legend=1 00:40:12.720 --rc geninfo_all_blocks=1 00:40:12.720 --rc geninfo_unexecuted_blocks=1 00:40:12.720 00:40:12.720 ' 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:12.720 17:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:12.720 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.721 17:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:12.721 17:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:40:12.721 17:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:40:20.866 17:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:20.866 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:20.866 
17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:20.866 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:20.866 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:20.867 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:20.867 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:20.867 17:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@247 -- # create_target_ns 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:20.867 17:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:40:20.867 10.0.0.1 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@11 -- # local val=167772162 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:20.867 10.0.0.2 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:20.867 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:20.868 17:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/cvl_0_0/ifalias 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:20.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:20.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:40:20.868 00:40:20.868 --- 10.0.0.1 ping statistics --- 00:40:20.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:20.868 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns 
exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:20.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:20.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:40:20.868 00:40:20.868 --- 10.0.0.2 ping statistics --- 00:40:20.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:20.868 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:20.868 17:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # 
get_net_dev initiator1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:40:20.868 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:40:20.869 17:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:40:20.869 ' 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=3451029 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 3451029 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3451029 ']' 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:20.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:20.869 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:20.869 [2024-11-05 17:04:26.960105] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:20.869 [2024-11-05 17:04:26.961266] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:40:20.869 [2024-11-05 17:04:26.961320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:20.869 [2024-11-05 17:04:27.043520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:20.869 [2024-11-05 17:04:27.085500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:20.869 [2024-11-05 17:04:27.085538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:20.869 [2024-11-05 17:04:27.085546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:20.869 [2024-11-05 17:04:27.085553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:20.869 [2024-11-05 17:04:27.085559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:20.869 [2024-11-05 17:04:27.087380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.869 [2024-11-05 17:04:27.087499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:20.869 [2024-11-05 17:04:27.087659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.869 [2024-11-05 17:04:27.087660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:20.869 [2024-11-05 17:04:27.143488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:20.869 [2024-11-05 17:04:27.143703] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:20.869 [2024-11-05 17:04:27.144715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:20.869 [2024-11-05 17:04:27.145328] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:20.869 [2024-11-05 17:04:27.145436] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:20.869 17:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:20.869 17:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:40:20.869 17:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:20.869 17:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:20.869 17:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:20.869 17:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:20.869 17:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:21.130 [2024-11-05 17:04:27.948126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:21.130 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:21.392 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:21.392 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:21.392 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:21.392 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:21.653 
17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:21.653 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:21.913 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:21.913 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:21.914 17:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:22.174 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:22.174 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:22.433 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:22.433 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:22.698 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:40:22.698 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:22.698 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:22.962 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:22.962 17:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:22.962 17:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:22.962 17:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:23.222 17:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:23.482 [2024-11-05 17:04:30.340275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:23.482 17:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:23.743 17:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:23.743 17:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:24.315 17:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:24.315 17:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:40:24.315 17:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:40:24.315 17:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:40:24.315 17:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:40:24.315 17:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:40:26.228 17:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:40:26.228 17:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:40:26.228 17:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:40:26.228 17:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:40:26.228 17:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:40:26.228 17:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:40:26.228 17:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:26.228 [global] 00:40:26.228 thread=1 00:40:26.228 invalidate=1 
00:40:26.228 rw=write 00:40:26.228 time_based=1 00:40:26.228 runtime=1 00:40:26.228 ioengine=libaio 00:40:26.228 direct=1 00:40:26.228 bs=4096 00:40:26.228 iodepth=1 00:40:26.228 norandommap=0 00:40:26.228 numjobs=1 00:40:26.228 00:40:26.228 verify_dump=1 00:40:26.228 verify_backlog=512 00:40:26.228 verify_state_save=0 00:40:26.228 do_verify=1 00:40:26.228 verify=crc32c-intel 00:40:26.228 [job0] 00:40:26.228 filename=/dev/nvme0n1 00:40:26.228 [job1] 00:40:26.228 filename=/dev/nvme0n2 00:40:26.228 [job2] 00:40:26.228 filename=/dev/nvme0n3 00:40:26.228 [job3] 00:40:26.228 filename=/dev/nvme0n4 00:40:26.511 Could not set queue depth (nvme0n1) 00:40:26.511 Could not set queue depth (nvme0n2) 00:40:26.511 Could not set queue depth (nvme0n3) 00:40:26.511 Could not set queue depth (nvme0n4) 00:40:26.774 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:26.774 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:26.774 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:26.774 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:26.774 fio-3.35 00:40:26.774 Starting 4 threads 00:40:28.197 00:40:28.197 job0: (groupid=0, jobs=1): err= 0: pid=3452504: Tue Nov 5 17:04:34 2024 00:40:28.197 read: IOPS=15, BW=63.6KiB/s (65.1kB/s)(64.0KiB/1006msec) 00:40:28.197 slat (nsec): min=24791, max=25733, avg=25107.12, stdev=234.90 00:40:28.197 clat (usec): min=41121, max=42086, avg=41865.53, stdev=255.72 00:40:28.197 lat (usec): min=41146, max=42111, avg=41890.63, stdev=255.58 00:40:28.197 clat percentiles (usec): 00:40:28.197 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:40:28.197 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:40:28.197 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 
00:40:28.197 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:28.197 | 99.99th=[42206] 00:40:28.197 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:40:28.197 slat (nsec): min=9764, max=67589, avg=30038.48, stdev=8220.06 00:40:28.197 clat (usec): min=208, max=1000, avg=618.14, stdev=119.77 00:40:28.197 lat (usec): min=230, max=1033, avg=648.18, stdev=121.92 00:40:28.197 clat percentiles (usec): 00:40:28.197 | 1.00th=[ 306], 5.00th=[ 412], 10.00th=[ 461], 20.00th=[ 523], 00:40:28.197 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:40:28.197 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 799], 00:40:28.197 | 99.00th=[ 848], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1004], 00:40:28.197 | 99.99th=[ 1004] 00:40:28.197 bw ( KiB/s): min= 4096, max= 4096, per=37.36%, avg=4096.00, stdev= 0.00, samples=1 00:40:28.197 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:28.197 lat (usec) : 250=0.38%, 500=15.34%, 750=71.02%, 1000=10.04% 00:40:28.197 lat (msec) : 2=0.19%, 50=3.03% 00:40:28.197 cpu : usr=0.60%, sys=1.59%, ctx=528, majf=0, minf=1 00:40:28.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.197 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:28.197 job1: (groupid=0, jobs=1): err= 0: pid=3452525: Tue Nov 5 17:04:34 2024 00:40:28.197 read: IOPS=496, BW=1987KiB/s (2034kB/s)(2064KiB/1039msec) 00:40:28.197 slat (nsec): min=7075, max=61271, avg=23237.72, stdev=7854.53 00:40:28.197 clat (usec): min=434, max=42091, avg=1006.17, stdev=3133.18 00:40:28.197 lat (usec): min=460, max=42117, avg=1029.40, stdev=3133.46 00:40:28.197 clat percentiles (usec): 00:40:28.197 | 1.00th=[ 
537], 5.00th=[ 578], 10.00th=[ 644], 20.00th=[ 701], 00:40:28.197 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 791], 00:40:28.197 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 914], 00:40:28.197 | 99.00th=[ 979], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:40:28.197 | 99.99th=[42206] 00:40:28.197 write: IOPS=985, BW=3942KiB/s (4037kB/s)(4096KiB/1039msec); 0 zone resets 00:40:28.197 slat (nsec): min=9166, max=69139, avg=28747.86, stdev=10023.23 00:40:28.197 clat (usec): min=125, max=774, avg=456.06, stdev=95.20 00:40:28.197 lat (usec): min=146, max=808, avg=484.81, stdev=97.43 00:40:28.197 clat percentiles (usec): 00:40:28.197 | 1.00th=[ 251], 5.00th=[ 306], 10.00th=[ 334], 20.00th=[ 371], 00:40:28.197 | 30.00th=[ 408], 40.00th=[ 437], 50.00th=[ 461], 60.00th=[ 482], 00:40:28.197 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 611], 00:40:28.197 | 99.00th=[ 701], 99.50th=[ 742], 99.90th=[ 758], 99.95th=[ 775], 00:40:28.197 | 99.99th=[ 775] 00:40:28.197 bw ( KiB/s): min= 4096, max= 4096, per=37.36%, avg=4096.00, stdev= 0.00, samples=2 00:40:28.197 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:40:28.197 lat (usec) : 250=0.65%, 500=45.52%, 750=32.34%, 1000=21.17% 00:40:28.197 lat (msec) : 2=0.13%, 50=0.19% 00:40:28.197 cpu : usr=2.31%, sys=3.95%, ctx=1540, majf=0, minf=1 00:40:28.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.197 issued rwts: total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:28.197 job2: (groupid=0, jobs=1): err= 0: pid=3452541: Tue Nov 5 17:04:34 2024 00:40:28.197 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:40:28.197 slat (nsec): min=6871, max=46535, avg=27894.04, stdev=2623.21 
00:40:28.197 clat (usec): min=642, max=1174, avg=959.12, stdev=67.75 00:40:28.197 lat (usec): min=670, max=1202, avg=987.01, stdev=68.08 00:40:28.197 clat percentiles (usec): 00:40:28.197 | 1.00th=[ 766], 5.00th=[ 840], 10.00th=[ 873], 20.00th=[ 914], 00:40:28.197 | 30.00th=[ 938], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:40:28.197 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:40:28.197 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:40:28.197 | 99.99th=[ 1172] 00:40:28.197 write: IOPS=799, BW=3197KiB/s (3274kB/s)(3200KiB/1001msec); 0 zone resets 00:40:28.197 slat (nsec): min=9650, max=70264, avg=32407.39, stdev=10658.63 00:40:28.197 clat (usec): min=153, max=833, avg=569.12, stdev=117.70 00:40:28.197 lat (usec): min=190, max=869, avg=601.53, stdev=122.55 00:40:28.197 clat percentiles (usec): 00:40:28.198 | 1.00th=[ 281], 5.00th=[ 359], 10.00th=[ 396], 20.00th=[ 465], 00:40:28.198 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:40:28.198 | 70.00th=[ 644], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 742], 00:40:28.198 | 99.00th=[ 799], 99.50th=[ 807], 99.90th=[ 832], 99.95th=[ 832], 00:40:28.198 | 99.99th=[ 832] 00:40:28.198 bw ( KiB/s): min= 4096, max= 4096, per=37.36%, avg=4096.00, stdev= 0.00, samples=1 00:40:28.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:28.198 lat (usec) : 250=0.15%, 500=16.08%, 750=42.15%, 1000=31.55% 00:40:28.198 lat (msec) : 2=10.06% 00:40:28.198 cpu : usr=2.20%, sys=5.80%, ctx=1314, majf=0, minf=1 00:40:28.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.198 issued rwts: total=512,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:28.198 job3: (groupid=0, 
jobs=1): err= 0: pid=3452548: Tue Nov 5 17:04:34 2024 00:40:28.198 read: IOPS=18, BW=73.3KiB/s (75.0kB/s)(76.0KiB/1037msec) 00:40:28.198 slat (nsec): min=25506, max=26102, avg=25769.00, stdev=155.87 00:40:28.198 clat (usec): min=856, max=42094, avg=37407.93, stdev=12839.20 00:40:28.198 lat (usec): min=882, max=42120, avg=37433.69, stdev=12839.16 00:40:28.198 clat percentiles (usec): 00:40:28.198 | 1.00th=[ 857], 5.00th=[ 857], 10.00th=[ 1123], 20.00th=[41157], 00:40:28.198 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:40:28.198 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:28.198 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:28.198 | 99.99th=[42206] 00:40:28.198 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:40:28.198 slat (nsec): min=9451, max=67920, avg=31353.63, stdev=7845.65 00:40:28.198 clat (usec): min=256, max=953, avg=597.13, stdev=130.02 00:40:28.198 lat (usec): min=268, max=987, avg=628.48, stdev=132.00 00:40:28.198 clat percentiles (usec): 00:40:28.198 | 1.00th=[ 297], 5.00th=[ 367], 10.00th=[ 412], 20.00th=[ 494], 00:40:28.198 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:40:28.198 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 807], 00:40:28.198 | 99.00th=[ 889], 99.50th=[ 889], 99.90th=[ 955], 99.95th=[ 955], 00:40:28.198 | 99.99th=[ 955] 00:40:28.198 bw ( KiB/s): min= 4096, max= 4096, per=37.36%, avg=4096.00, stdev= 0.00, samples=1 00:40:28.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:28.198 lat (usec) : 500=19.96%, 750=64.97%, 1000=11.68% 00:40:28.198 lat (msec) : 2=0.19%, 50=3.20% 00:40:28.198 cpu : usr=0.77%, sys=1.54%, ctx=532, majf=0, minf=1 00:40:28.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.198 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:28.198 00:40:28.198 Run status group 0 (all jobs): 00:40:28.198 READ: bw=4092KiB/s (4191kB/s), 63.6KiB/s-2046KiB/s (65.1kB/s-2095kB/s), io=4252KiB (4354kB), run=1001-1039msec 00:40:28.198 WRITE: bw=10.7MiB/s (11.2MB/s), 1975KiB/s-3942KiB/s (2022kB/s-4037kB/s), io=11.1MiB (11.7MB), run=1001-1039msec 00:40:28.198 00:40:28.198 Disk stats (read/write): 00:40:28.198 nvme0n1: ios=61/512, merge=0/0, ticks=507/304, in_queue=811, util=86.17% 00:40:28.198 nvme0n2: ios=544/874, merge=0/0, ticks=524/396, in_queue=920, util=95.91% 00:40:28.198 nvme0n3: ios=535/518, merge=0/0, ticks=1416/242, in_queue=1658, util=96.29% 00:40:28.198 nvme0n4: ios=41/512, merge=0/0, ticks=803/286, in_queue=1089, util=91.32% 00:40:28.198 17:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:28.198 [global] 00:40:28.198 thread=1 00:40:28.198 invalidate=1 00:40:28.198 rw=randwrite 00:40:28.198 time_based=1 00:40:28.198 runtime=1 00:40:28.198 ioengine=libaio 00:40:28.198 direct=1 00:40:28.198 bs=4096 00:40:28.198 iodepth=1 00:40:28.198 norandommap=0 00:40:28.198 numjobs=1 00:40:28.198 00:40:28.198 verify_dump=1 00:40:28.198 verify_backlog=512 00:40:28.198 verify_state_save=0 00:40:28.198 do_verify=1 00:40:28.198 verify=crc32c-intel 00:40:28.198 [job0] 00:40:28.198 filename=/dev/nvme0n1 00:40:28.198 [job1] 00:40:28.198 filename=/dev/nvme0n2 00:40:28.198 [job2] 00:40:28.198 filename=/dev/nvme0n3 00:40:28.198 [job3] 00:40:28.198 filename=/dev/nvme0n4 00:40:28.198 Could not set queue depth (nvme0n1) 00:40:28.198 Could not set queue depth (nvme0n2) 00:40:28.198 Could not set queue depth (nvme0n3) 00:40:28.198 Could not set queue depth (nvme0n4) 00:40:28.463 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:28.463 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:28.463 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:28.463 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:28.463 fio-3.35 00:40:28.463 Starting 4 threads 00:40:29.888 00:40:29.888 job0: (groupid=0, jobs=1): err= 0: pid=3452967: Tue Nov 5 17:04:36 2024 00:40:29.888 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:40:29.888 slat (nsec): min=25374, max=45241, avg=26293.43, stdev=1900.62 00:40:29.888 clat (usec): min=759, max=1263, avg=1034.88, stdev=83.47 00:40:29.888 lat (usec): min=786, max=1289, avg=1061.17, stdev=83.38 00:40:29.888 clat percentiles (usec): 00:40:29.888 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 988], 00:40:29.888 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1057], 00:40:29.888 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:40:29.888 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1270], 99.95th=[ 1270], 00:40:29.888 | 99.99th=[ 1270] 00:40:29.888 write: IOPS=677, BW=2709KiB/s (2774kB/s)(2712KiB/1001msec); 0 zone resets 00:40:29.888 slat (nsec): min=9472, max=66312, avg=30645.46, stdev=9270.89 00:40:29.888 clat (usec): min=271, max=1041, avg=627.70, stdev=132.78 00:40:29.888 lat (usec): min=282, max=1074, avg=658.34, stdev=135.84 00:40:29.888 clat percentiles (usec): 00:40:29.888 | 1.00th=[ 338], 5.00th=[ 392], 10.00th=[ 453], 20.00th=[ 515], 00:40:29.888 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 660], 00:40:29.888 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 799], 95.00th=[ 848], 00:40:29.888 | 99.00th=[ 963], 99.50th=[ 971], 99.90th=[ 1045], 99.95th=[ 1045], 00:40:29.888 | 99.99th=[ 1045] 00:40:29.888 bw ( 
KiB/s): min= 4096, max= 4096, per=43.68%, avg=4096.00, stdev= 0.00, samples=1 00:40:29.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:29.888 lat (usec) : 500=10.59%, 750=36.72%, 1000=20.08% 00:40:29.888 lat (msec) : 2=32.61% 00:40:29.888 cpu : usr=2.10%, sys=3.30%, ctx=1192, majf=0, minf=1 00:40:29.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:29.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.888 issued rwts: total=512,678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:29.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:29.888 job1: (groupid=0, jobs=1): err= 0: pid=3452983: Tue Nov 5 17:04:36 2024 00:40:29.888 read: IOPS=18, BW=75.6KiB/s (77.4kB/s)(76.0KiB/1005msec) 00:40:29.888 slat (nsec): min=7333, max=25450, avg=23461.89, stdev=5200.19 00:40:29.888 clat (usec): min=554, max=41713, avg=38875.33, stdev=9281.55 00:40:29.888 lat (usec): min=564, max=41721, avg=38898.79, stdev=9284.69 00:40:29.888 clat percentiles (usec): 00:40:29.888 | 1.00th=[ 553], 5.00th=[ 553], 10.00th=[40633], 20.00th=[41157], 00:40:29.888 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:29.888 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:40:29.888 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:29.888 | 99.99th=[41681] 00:40:29.888 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:40:29.888 slat (nsec): min=9325, max=75547, avg=29840.66, stdev=8443.00 00:40:29.888 clat (usec): min=141, max=963, avg=480.23, stdev=132.04 00:40:29.888 lat (usec): min=151, max=995, avg=510.08, stdev=134.10 00:40:29.888 clat percentiles (usec): 00:40:29.888 | 1.00th=[ 235], 5.00th=[ 281], 10.00th=[ 318], 20.00th=[ 359], 00:40:29.888 | 30.00th=[ 388], 40.00th=[ 441], 50.00th=[ 474], 60.00th=[ 519], 
00:40:29.888 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 652], 95.00th=[ 701], 00:40:29.888 | 99.00th=[ 799], 99.50th=[ 799], 99.90th=[ 963], 99.95th=[ 963], 00:40:29.888 | 99.99th=[ 963] 00:40:29.888 bw ( KiB/s): min= 4096, max= 4096, per=43.68%, avg=4096.00, stdev= 0.00, samples=1 00:40:29.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:29.888 lat (usec) : 250=2.45%, 500=52.73%, 750=38.98%, 1000=2.45% 00:40:29.888 lat (msec) : 50=3.39% 00:40:29.888 cpu : usr=1.00%, sys=1.29%, ctx=531, majf=0, minf=1 00:40:29.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:29.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.888 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:29.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:29.888 job2: (groupid=0, jobs=1): err= 0: pid=3452990: Tue Nov 5 17:04:36 2024 00:40:29.888 read: IOPS=18, BW=75.0KiB/s (76.7kB/s)(76.0KiB/1014msec) 00:40:29.888 slat (nsec): min=27059, max=28101, avg=27616.79, stdev=242.06 00:40:29.888 clat (usec): min=40782, max=41243, avg=40977.36, stdev=112.05 00:40:29.888 lat (usec): min=40810, max=41270, avg=41004.97, stdev=111.95 00:40:29.888 clat percentiles (usec): 00:40:29.888 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:40:29.888 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:29.888 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:29.888 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:29.888 | 99.99th=[41157] 00:40:29.888 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:40:29.888 slat (nsec): min=9401, max=54807, avg=27237.63, stdev=11435.49 00:40:29.888 clat (usec): min=120, max=930, avg=423.09, stdev=175.64 00:40:29.888 lat (usec): min=138, max=949, 
avg=450.33, stdev=178.24 00:40:29.888 clat percentiles (usec): 00:40:29.888 | 1.00th=[ 133], 5.00th=[ 215], 10.00th=[ 245], 20.00th=[ 285], 00:40:29.888 | 30.00th=[ 310], 40.00th=[ 338], 50.00th=[ 359], 60.00th=[ 404], 00:40:29.888 | 70.00th=[ 482], 80.00th=[ 611], 90.00th=[ 709], 95.00th=[ 750], 00:40:29.888 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 930], 00:40:29.888 | 99.99th=[ 930] 00:40:29.888 bw ( KiB/s): min= 4096, max= 4096, per=43.68%, avg=4096.00, stdev= 0.00, samples=1 00:40:29.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:29.888 lat (usec) : 250=11.68%, 500=58.00%, 750=21.47%, 1000=5.27% 00:40:29.888 lat (msec) : 50=3.58% 00:40:29.888 cpu : usr=0.59%, sys=1.58%, ctx=532, majf=0, minf=1 00:40:29.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:29.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.888 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:29.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:29.888 job3: (groupid=0, jobs=1): err= 0: pid=3452996: Tue Nov 5 17:04:36 2024 00:40:29.888 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:40:29.888 slat (nsec): min=7795, max=61645, avg=26355.49, stdev=3511.28 00:40:29.888 clat (usec): min=776, max=1353, avg=1057.72, stdev=82.82 00:40:29.888 lat (usec): min=803, max=1379, avg=1084.08, stdev=82.92 00:40:29.888 clat percentiles (usec): 00:40:29.888 | 1.00th=[ 816], 5.00th=[ 906], 10.00th=[ 955], 20.00th=[ 996], 00:40:29.888 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1074], 00:40:29.888 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:40:29.888 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1352], 99.95th=[ 1352], 00:40:29.888 | 99.99th=[ 1352] 00:40:29.888 write: IOPS=674, BW=2697KiB/s (2762kB/s)(2700KiB/1001msec); 0 
zone resets 00:40:29.888 slat (nsec): min=9524, max=50846, avg=28813.16, stdev=8849.69 00:40:29.888 clat (usec): min=231, max=963, avg=616.36, stdev=123.30 00:40:29.888 lat (usec): min=241, max=996, avg=645.17, stdev=126.79 00:40:29.888 clat percentiles (usec): 00:40:29.888 | 1.00th=[ 318], 5.00th=[ 383], 10.00th=[ 457], 20.00th=[ 515], 00:40:29.888 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:40:29.888 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 807], 00:40:29.888 | 99.00th=[ 906], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:40:29.888 | 99.99th=[ 963] 00:40:29.888 bw ( KiB/s): min= 4096, max= 4096, per=43.68%, avg=4096.00, stdev= 0.00, samples=1 00:40:29.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:29.888 lat (usec) : 250=0.17%, 500=10.11%, 750=39.85%, 1000=15.59% 00:40:29.888 lat (msec) : 2=34.29% 00:40:29.888 cpu : usr=1.30%, sys=4.00%, ctx=1187, majf=0, minf=1 00:40:29.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:29.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.888 issued rwts: total=512,675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:29.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:29.888 00:40:29.888 Run status group 0 (all jobs): 00:40:29.888 READ: bw=4189KiB/s (4290kB/s), 75.0KiB/s-2046KiB/s (76.7kB/s-2095kB/s), io=4248KiB (4350kB), run=1001-1014msec 00:40:29.888 WRITE: bw=9377KiB/s (9602kB/s), 2020KiB/s-2709KiB/s (2068kB/s-2774kB/s), io=9508KiB (9736kB), run=1001-1014msec 00:40:29.888 00:40:29.888 Disk stats (read/write): 00:40:29.888 nvme0n1: ios=486/512, merge=0/0, ticks=733/309, in_queue=1042, util=97.09% 00:40:29.889 nvme0n2: ios=48/512, merge=0/0, ticks=587/229, in_queue=816, util=87.45% 00:40:29.889 nvme0n3: ios=36/512, merge=0/0, ticks=1493/215, in_queue=1708, util=96.41% 00:40:29.889 nvme0n4: 
ios=482/512, merge=0/0, ticks=764/311, in_queue=1075, util=90.59% 00:40:29.889 17:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:29.889 [global] 00:40:29.889 thread=1 00:40:29.889 invalidate=1 00:40:29.889 rw=write 00:40:29.889 time_based=1 00:40:29.889 runtime=1 00:40:29.889 ioengine=libaio 00:40:29.889 direct=1 00:40:29.889 bs=4096 00:40:29.889 iodepth=128 00:40:29.889 norandommap=0 00:40:29.889 numjobs=1 00:40:29.889 00:40:29.889 verify_dump=1 00:40:29.889 verify_backlog=512 00:40:29.889 verify_state_save=0 00:40:29.889 do_verify=1 00:40:29.889 verify=crc32c-intel 00:40:29.889 [job0] 00:40:29.889 filename=/dev/nvme0n1 00:40:29.889 [job1] 00:40:29.889 filename=/dev/nvme0n2 00:40:29.889 [job2] 00:40:29.889 filename=/dev/nvme0n3 00:40:29.889 [job3] 00:40:29.889 filename=/dev/nvme0n4 00:40:29.889 Could not set queue depth (nvme0n1) 00:40:29.889 Could not set queue depth (nvme0n2) 00:40:29.889 Could not set queue depth (nvme0n3) 00:40:29.889 Could not set queue depth (nvme0n4) 00:40:30.149 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:30.149 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:30.149 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:30.149 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:30.149 fio-3.35 00:40:30.149 Starting 4 threads 00:40:31.544 00:40:31.544 job0: (groupid=0, jobs=1): err= 0: pid=3453416: Tue Nov 5 17:04:38 2024 00:40:31.544 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:40:31.544 slat (nsec): min=923, max=11671k, avg=64336.19, stdev=537584.76 00:40:31.544 clat (usec): min=1597, max=26557, avg=9350.11, stdev=3939.59 
00:40:31.544 lat (usec): min=1604, max=26565, avg=9414.45, stdev=3972.09 00:40:31.544 clat percentiles (usec): 00:40:31.544 | 1.00th=[ 3261], 5.00th=[ 4948], 10.00th=[ 5800], 20.00th=[ 6652], 00:40:31.544 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8356], 60.00th=[ 9372], 00:40:31.544 | 70.00th=[10028], 80.00th=[11600], 90.00th=[13173], 95.00th=[17695], 00:40:31.544 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:40:31.544 | 99.99th=[26608] 00:40:31.544 write: IOPS=6441, BW=25.2MiB/s (26.4MB/s)(25.3MiB/1004msec); 0 zone resets 00:40:31.544 slat (nsec): min=1590, max=11551k, avg=75170.16, stdev=531298.24 00:40:31.544 clat (usec): min=376, max=50256, avg=10823.35, stdev=6638.05 00:40:31.544 lat (usec): min=386, max=50265, avg=10898.52, stdev=6672.27 00:40:31.544 clat percentiles (usec): 00:40:31.544 | 1.00th=[ 1663], 5.00th=[ 3818], 10.00th=[ 5014], 20.00th=[ 5997], 00:40:31.544 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 8717], 60.00th=[10159], 00:40:31.544 | 70.00th=[11469], 80.00th=[15270], 90.00th=[19530], 95.00th=[25822], 00:40:31.544 | 99.00th=[32637], 99.50th=[34866], 99.90th=[43254], 99.95th=[45351], 00:40:31.544 | 99.99th=[50070] 00:40:31.544 bw ( KiB/s): min=25328, max=25392, per=28.94%, avg=25360.00, stdev=45.25, samples=2 00:40:31.544 iops : min= 6332, max= 6348, avg=6340.00, stdev=11.31, samples=2 00:40:31.544 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.06% 00:40:31.544 lat (msec) : 2=0.67%, 4=3.28%, 10=60.02%, 20=29.43%, 50=6.49% 00:40:31.544 lat (msec) : 100=0.01% 00:40:31.544 cpu : usr=4.39%, sys=7.58%, ctx=428, majf=0, minf=1 00:40:31.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:31.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:31.544 issued rwts: total=6144,6467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.544 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:40:31.544 job1: (groupid=0, jobs=1): err= 0: pid=3453435: Tue Nov 5 17:04:38 2024 00:40:31.544 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:40:31.544 slat (nsec): min=915, max=14045k, avg=65058.02, stdev=564259.66 00:40:31.544 clat (usec): min=1995, max=28394, avg=9454.74, stdev=4344.81 00:40:31.544 lat (usec): min=2001, max=31100, avg=9519.80, stdev=4386.59 00:40:31.544 clat percentiles (usec): 00:40:31.544 | 1.00th=[ 2769], 5.00th=[ 4015], 10.00th=[ 5145], 20.00th=[ 6652], 00:40:31.544 | 30.00th=[ 7177], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:40:31.544 | 70.00th=[10028], 80.00th=[12125], 90.00th=[14615], 95.00th=[18482], 00:40:31.544 | 99.00th=[25560], 99.50th=[26870], 99.90th=[27919], 99.95th=[27919], 00:40:31.544 | 99.99th=[28443] 00:40:31.544 write: IOPS=6342, BW=24.8MiB/s (26.0MB/s)(25.0MiB/1010msec); 0 zone resets 00:40:31.544 slat (nsec): min=1579, max=14774k, avg=74235.37, stdev=531065.62 00:40:31.544 clat (usec): min=682, max=53285, avg=10937.41, stdev=8943.25 00:40:31.544 lat (usec): min=715, max=53294, avg=11011.65, stdev=9002.47 00:40:31.544 clat percentiles (usec): 00:40:31.544 | 1.00th=[ 1254], 5.00th=[ 2474], 10.00th=[ 4047], 20.00th=[ 5473], 00:40:31.544 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7767], 60.00th=[ 9110], 00:40:31.544 | 70.00th=[10945], 80.00th=[14222], 90.00th=[22414], 95.00th=[33162], 00:40:31.544 | 99.00th=[43779], 99.50th=[46924], 99.90th=[48497], 99.95th=[53216], 00:40:31.544 | 99.99th=[53216] 00:40:31.544 bw ( KiB/s): min=18536, max=31696, per=28.66%, avg=25116.00, stdev=9305.53, samples=2 00:40:31.544 iops : min= 4634, max= 7924, avg=6279.00, stdev=2326.38, samples=2 00:40:31.544 lat (usec) : 750=0.06%, 1000=0.01% 00:40:31.544 lat (msec) : 2=1.67%, 4=5.70%, 10=58.66%, 20=26.29%, 50=7.56% 00:40:31.544 lat (msec) : 100=0.05% 00:40:31.544 cpu : usr=5.05%, sys=6.54%, ctx=509, majf=0, minf=1 00:40:31.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 
00:40:31.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:31.544 issued rwts: total=6144,6406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:31.544 job2: (groupid=0, jobs=1): err= 0: pid=3453453: Tue Nov 5 17:04:38 2024 00:40:31.544 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:40:31.544 slat (nsec): min=918, max=18452k, avg=153817.59, stdev=1078718.92 00:40:31.544 clat (usec): min=6327, max=60986, avg=20662.03, stdev=11272.06 00:40:31.544 lat (usec): min=6333, max=72557, avg=20815.85, stdev=11376.50 00:40:31.544 clat percentiles (usec): 00:40:31.544 | 1.00th=[ 6915], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[11207], 00:40:31.544 | 30.00th=[13304], 40.00th=[14615], 50.00th=[17433], 60.00th=[20055], 00:40:31.544 | 70.00th=[24773], 80.00th=[29492], 90.00th=[36963], 95.00th=[45351], 00:40:31.544 | 99.00th=[52691], 99.50th=[55313], 99.90th=[61080], 99.95th=[61080], 00:40:31.544 | 99.99th=[61080] 00:40:31.544 write: IOPS=2981, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1007msec); 0 zone resets 00:40:31.544 slat (nsec): min=1640, max=20734k, avg=194163.59, stdev=1158274.32 00:40:31.544 clat (usec): min=1333, max=81505, avg=24919.04, stdev=17275.13 00:40:31.544 lat (usec): min=1345, max=81532, avg=25113.21, stdev=17405.68 00:40:31.544 clat percentiles (usec): 00:40:31.544 | 1.00th=[ 3982], 5.00th=[ 7308], 10.00th=[ 9896], 20.00th=[10814], 00:40:31.544 | 30.00th=[11863], 40.00th=[13042], 50.00th=[16188], 60.00th=[23462], 00:40:31.544 | 70.00th=[35914], 80.00th=[41157], 90.00th=[52167], 95.00th=[57410], 00:40:31.544 | 99.00th=[70779], 99.50th=[76022], 99.90th=[81265], 99.95th=[81265], 00:40:31.544 | 99.99th=[81265] 00:40:31.544 bw ( KiB/s): min= 8952, max=14048, per=13.12%, avg=11500.00, stdev=3603.42, samples=2 00:40:31.544 iops : min= 2238, max= 3512, avg=2875.00, stdev=900.85, 
samples=2 00:40:31.544 lat (msec) : 2=0.18%, 4=0.72%, 10=13.52%, 20=43.55%, 50=34.75% 00:40:31.544 lat (msec) : 100=7.28% 00:40:31.544 cpu : usr=2.49%, sys=2.88%, ctx=280, majf=0, minf=1 00:40:31.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:40:31.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:31.544 issued rwts: total=2560,3002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:31.544 job3: (groupid=0, jobs=1): err= 0: pid=3453460: Tue Nov 5 17:04:38 2024 00:40:31.544 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:40:31.544 slat (nsec): min=919, max=9896.9k, avg=69805.89, stdev=551687.65 00:40:31.544 clat (usec): min=1455, max=25113, avg=10463.16, stdev=3316.63 00:40:31.544 lat (usec): min=1469, max=25117, avg=10532.97, stdev=3340.08 00:40:31.544 clat percentiles (usec): 00:40:31.544 | 1.00th=[ 2507], 5.00th=[ 5800], 10.00th=[ 7242], 20.00th=[ 8160], 00:40:31.544 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10683], 00:40:31.544 | 70.00th=[11076], 80.00th=[12780], 90.00th=[15139], 95.00th=[17171], 00:40:31.544 | 99.00th=[19530], 99.50th=[22152], 99.90th=[22152], 99.95th=[25035], 00:40:31.544 | 99.99th=[25035] 00:40:31.544 write: IOPS=6230, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1003msec); 0 zone resets 00:40:31.544 slat (nsec): min=1545, max=18885k, avg=70025.73, stdev=526876.48 00:40:31.544 clat (usec): min=713, max=39033, avg=10092.52, stdev=5165.35 00:40:31.544 lat (usec): min=831, max=40229, avg=10162.55, stdev=5189.47 00:40:31.544 clat percentiles (usec): 00:40:31.544 | 1.00th=[ 2057], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 6783], 00:40:31.544 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:40:31.544 | 70.00th=[10814], 80.00th=[11600], 90.00th=[15795], 95.00th=[22676], 00:40:31.544 | 
99.00th=[30540], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011], 00:40:31.544 | 99.99th=[39060] 00:40:31.544 bw ( KiB/s): min=23472, max=25728, per=28.08%, avg=24600.00, stdev=1595.23, samples=2 00:40:31.544 iops : min= 5868, max= 6432, avg=6150.00, stdev=398.81, samples=2 00:40:31.544 lat (usec) : 750=0.01%, 1000=0.04% 00:40:31.544 lat (msec) : 2=0.44%, 4=2.49%, 10=55.14%, 20=38.44%, 50=3.44% 00:40:31.544 cpu : usr=4.39%, sys=7.49%, ctx=563, majf=0, minf=1 00:40:31.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:31.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:31.544 issued rwts: total=6144,6249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:31.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:31.544 00:40:31.544 Run status group 0 (all jobs): 00:40:31.544 READ: bw=81.2MiB/s (85.1MB/s), 9.93MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=82.0MiB (86.0MB), run=1003-1010msec 00:40:31.544 WRITE: bw=85.6MiB/s (89.7MB/s), 11.6MiB/s-25.2MiB/s (12.2MB/s-26.4MB/s), io=86.4MiB (90.6MB), run=1003-1010msec 00:40:31.544 00:40:31.544 Disk stats (read/write): 00:40:31.544 nvme0n1: ios=5035/5120, merge=0/0, ticks=47308/52627, in_queue=99935, util=87.58% 00:40:31.545 nvme0n2: ios=5663/5719, merge=0/0, ticks=49397/49631, in_queue=99028, util=90.72% 00:40:31.545 nvme0n3: ios=2174/2560, merge=0/0, ticks=15567/24086, in_queue=39653, util=88.38% 00:40:31.545 nvme0n4: ios=4924/5120, merge=0/0, ticks=38288/37815, in_queue=76103, util=89.09% 00:40:31.545 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:31.545 [global] 00:40:31.545 thread=1 00:40:31.545 invalidate=1 00:40:31.545 rw=randwrite 00:40:31.545 time_based=1 00:40:31.545 runtime=1 00:40:31.545 
ioengine=libaio 00:40:31.545 direct=1 00:40:31.545 bs=4096 00:40:31.545 iodepth=128 00:40:31.545 norandommap=0 00:40:31.545 numjobs=1 00:40:31.545 00:40:31.545 verify_dump=1 00:40:31.545 verify_backlog=512 00:40:31.545 verify_state_save=0 00:40:31.545 do_verify=1 00:40:31.545 verify=crc32c-intel 00:40:31.545 [job0] 00:40:31.545 filename=/dev/nvme0n1 00:40:31.545 [job1] 00:40:31.545 filename=/dev/nvme0n2 00:40:31.545 [job2] 00:40:31.545 filename=/dev/nvme0n3 00:40:31.545 [job3] 00:40:31.545 filename=/dev/nvme0n4 00:40:31.545 Could not set queue depth (nvme0n1) 00:40:31.545 Could not set queue depth (nvme0n2) 00:40:31.545 Could not set queue depth (nvme0n3) 00:40:31.545 Could not set queue depth (nvme0n4) 00:40:31.803 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:31.803 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:31.803 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:31.803 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:31.803 fio-3.35 00:40:31.803 Starting 4 threads 00:40:33.188 00:40:33.188 job0: (groupid=0, jobs=1): err= 0: pid=3453876: Tue Nov 5 17:04:39 2024 00:40:33.188 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:40:33.188 slat (nsec): min=914, max=16833k, avg=106581.04, stdev=742600.11 00:40:33.188 clat (usec): min=3567, max=69059, avg=12884.83, stdev=8420.48 00:40:33.188 lat (usec): min=3585, max=69071, avg=12991.41, stdev=8493.15 00:40:33.188 clat percentiles (usec): 00:40:33.188 | 1.00th=[ 5800], 5.00th=[ 6718], 10.00th=[ 7373], 20.00th=[ 7767], 00:40:33.188 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[10159], 60.00th=[11469], 00:40:33.188 | 70.00th=[13173], 80.00th=[16909], 90.00th=[21627], 95.00th=[26346], 00:40:33.188 | 99.00th=[53740], 99.50th=[66847], 
99.90th=[68682], 99.95th=[68682], 00:40:33.188 | 99.99th=[68682] 00:40:33.188 write: IOPS=4722, BW=18.4MiB/s (19.3MB/s)(18.6MiB/1011msec); 0 zone resets 00:40:33.188 slat (nsec): min=1535, max=16544k, avg=92016.77, stdev=643375.81 00:40:33.188 clat (usec): min=1124, max=69007, avg=14448.52, stdev=11675.30 00:40:33.188 lat (usec): min=1133, max=69009, avg=14540.54, stdev=11732.00 00:40:33.188 clat percentiles (usec): 00:40:33.188 | 1.00th=[ 3851], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 7046], 00:40:33.188 | 30.00th=[ 7373], 40.00th=[ 8979], 50.00th=[11469], 60.00th=[13698], 00:40:33.188 | 70.00th=[14091], 80.00th=[18482], 90.00th=[25560], 95.00th=[38011], 00:40:33.188 | 99.00th=[63701], 99.50th=[64750], 99.90th=[66323], 99.95th=[66323], 00:40:33.188 | 99.99th=[68682] 00:40:33.188 bw ( KiB/s): min=16696, max=20480, per=21.29%, avg=18588.00, stdev=2675.69, samples=2 00:40:33.188 iops : min= 4174, max= 5120, avg=4647.00, stdev=668.92, samples=2 00:40:33.188 lat (msec) : 2=0.02%, 4=0.66%, 10=45.62%, 20=39.53%, 50=11.90% 00:40:33.188 lat (msec) : 100=2.27% 00:40:33.188 cpu : usr=3.76%, sys=4.95%, ctx=357, majf=0, minf=1 00:40:33.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:33.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.188 issued rwts: total=4608,4774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.188 job1: (groupid=0, jobs=1): err= 0: pid=3453885: Tue Nov 5 17:04:39 2024 00:40:33.188 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:40:33.188 slat (nsec): min=952, max=14112k, avg=92238.46, stdev=727779.09 00:40:33.188 clat (usec): min=3925, max=28266, avg=11749.42, stdev=4568.31 00:40:33.188 lat (usec): min=3934, max=28276, avg=11841.66, stdev=4618.55 00:40:33.188 clat percentiles (usec): 00:40:33.189 | 1.00th=[ 4424], 
5.00th=[ 6063], 10.00th=[ 7373], 20.00th=[ 7635], 00:40:33.189 | 30.00th=[ 8291], 40.00th=[ 9503], 50.00th=[10552], 60.00th=[12518], 00:40:33.189 | 70.00th=[13304], 80.00th=[15664], 90.00th=[19268], 95.00th=[20317], 00:40:33.189 | 99.00th=[23987], 99.50th=[23987], 99.90th=[28181], 99.95th=[28181], 00:40:33.189 | 99.99th=[28181] 00:40:33.189 write: IOPS=3819, BW=14.9MiB/s (15.6MB/s)(15.1MiB/1011msec); 0 zone resets 00:40:33.189 slat (nsec): min=1621, max=11226k, avg=168531.53, stdev=859443.48 00:40:33.189 clat (usec): min=1125, max=91102, avg=22308.21, stdev=20427.90 00:40:33.189 lat (usec): min=1136, max=91112, avg=22476.74, stdev=20549.66 00:40:33.189 clat percentiles (usec): 00:40:33.189 | 1.00th=[ 3818], 5.00th=[ 5866], 10.00th=[ 6652], 20.00th=[ 7504], 00:40:33.189 | 30.00th=[ 9634], 40.00th=[12125], 50.00th=[13829], 60.00th=[14091], 00:40:33.189 | 70.00th=[21365], 80.00th=[36439], 90.00th=[60031], 95.00th=[64750], 00:40:33.189 | 99.00th=[81265], 99.50th=[84411], 99.90th=[90702], 99.95th=[90702], 00:40:33.189 | 99.99th=[90702] 00:40:33.189 bw ( KiB/s): min=12808, max=17072, per=17.11%, avg=14940.00, stdev=3015.10, samples=2 00:40:33.189 iops : min= 3202, max= 4268, avg=3735.00, stdev=753.78, samples=2 00:40:33.189 lat (msec) : 2=0.03%, 4=0.67%, 10=36.07%, 20=44.12%, 50=11.20% 00:40:33.189 lat (msec) : 100=7.91% 00:40:33.189 cpu : usr=2.97%, sys=3.56%, ctx=365, majf=0, minf=1 00:40:33.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:33.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.189 issued rwts: total=3584,3862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.189 job2: (groupid=0, jobs=1): err= 0: pid=3453892: Tue Nov 5 17:04:39 2024 00:40:33.189 read: IOPS=5998, BW=23.4MiB/s (24.6MB/s)(23.5MiB/1005msec) 00:40:33.189 slat (nsec): 
min=980, max=10604k, avg=87476.46, stdev=735907.97 00:40:33.189 clat (usec): min=3564, max=26317, avg=11071.76, stdev=2904.91 00:40:33.189 lat (usec): min=3569, max=26321, avg=11159.24, stdev=2962.20 00:40:33.189 clat percentiles (usec): 00:40:33.189 | 1.00th=[ 5669], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 8717], 00:40:33.189 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10421], 60.00th=[10814], 00:40:33.189 | 70.00th=[11469], 80.00th=[12649], 90.00th=[15401], 95.00th=[17171], 00:40:33.189 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20579], 99.95th=[20841], 00:40:33.189 | 99.99th=[26346] 00:40:33.189 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:40:33.189 slat (nsec): min=1626, max=9367.2k, avg=71587.87, stdev=552907.08 00:40:33.189 clat (usec): min=1604, max=20642, avg=9885.69, stdev=2647.94 00:40:33.189 lat (usec): min=1615, max=20645, avg=9957.27, stdev=2671.01 00:40:33.189 clat percentiles (usec): 00:40:33.189 | 1.00th=[ 4080], 5.00th=[ 5604], 10.00th=[ 6652], 20.00th=[ 7439], 00:40:33.189 | 30.00th=[ 8291], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10552], 00:40:33.189 | 70.00th=[11207], 80.00th=[11338], 90.00th=[13566], 95.00th=[14615], 00:40:33.189 | 99.00th=[16581], 99.50th=[17695], 99.90th=[20055], 99.95th=[20317], 00:40:33.189 | 99.99th=[20579] 00:40:33.189 bw ( KiB/s): min=24576, max=24576, per=28.15%, avg=24576.00, stdev= 0.00, samples=2 00:40:33.189 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:40:33.189 lat (msec) : 2=0.07%, 4=0.47%, 10=44.12%, 20=54.79%, 50=0.55% 00:40:33.189 cpu : usr=4.28%, sys=6.18%, ctx=376, majf=0, minf=2 00:40:33.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:33.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.189 issued rwts: total=6028,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.189 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:40:33.189 job3: (groupid=0, jobs=1): err= 0: pid=3453898: Tue Nov 5 17:04:39 2024 00:40:33.189 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:40:33.189 slat (nsec): min=1401, max=15788k, avg=73641.61, stdev=644398.25 00:40:33.189 clat (usec): min=2633, max=29942, avg=9887.89, stdev=4170.69 00:40:33.189 lat (usec): min=2638, max=29952, avg=9961.54, stdev=4215.80 00:40:33.189 clat percentiles (usec): 00:40:33.189 | 1.00th=[ 3458], 5.00th=[ 5473], 10.00th=[ 6587], 20.00th=[ 7111], 00:40:33.189 | 30.00th=[ 7504], 40.00th=[ 7963], 50.00th=[ 8848], 60.00th=[ 9503], 00:40:33.189 | 70.00th=[10683], 80.00th=[12518], 90.00th=[14746], 95.00th=[17695], 00:40:33.189 | 99.00th=[26084], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:40:33.189 | 99.99th=[30016] 00:40:33.189 write: IOPS=7263, BW=28.4MiB/s (29.8MB/s)(28.5MiB/1003msec); 0 zone resets 00:40:33.189 slat (nsec): min=1659, max=8503.0k, avg=53488.65, stdev=453046.19 00:40:33.189 clat (usec): min=827, max=22009, avg=7671.71, stdev=2656.48 00:40:33.189 lat (usec): min=855, max=22033, avg=7725.20, stdev=2668.16 00:40:33.189 clat percentiles (usec): 00:40:33.189 | 1.00th=[ 1647], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5473], 00:40:33.189 | 30.00th=[ 6521], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7832], 00:40:33.189 | 70.00th=[ 8225], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11863], 00:40:33.189 | 99.00th=[13829], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:40:33.189 | 99.99th=[21890] 00:40:33.189 bw ( KiB/s): min=23616, max=33784, per=32.88%, avg=28700.00, stdev=7189.86, samples=2 00:40:33.189 iops : min= 5904, max= 8446, avg=7175.00, stdev=1797.47, samples=2 00:40:33.189 lat (usec) : 1000=0.02% 00:40:33.189 lat (msec) : 2=0.71%, 4=2.82%, 10=69.72%, 20=24.69%, 50=2.04% 00:40:33.189 cpu : usr=5.39%, sys=8.18%, ctx=309, majf=0, minf=1 00:40:33.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:33.189 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:33.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:33.189 issued rwts: total=7168,7285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:33.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:33.189 00:40:33.189 Run status group 0 (all jobs): 00:40:33.189 READ: bw=82.6MiB/s (86.7MB/s), 13.8MiB/s-27.9MiB/s (14.5MB/s-29.3MB/s), io=83.5MiB (87.6MB), run=1003-1011msec 00:40:33.189 WRITE: bw=85.3MiB/s (89.4MB/s), 14.9MiB/s-28.4MiB/s (15.6MB/s-29.8MB/s), io=86.2MiB (90.4MB), run=1003-1011msec 00:40:33.189 00:40:33.189 Disk stats (read/write): 00:40:33.189 nvme0n1: ios=4146/4327, merge=0/0, ticks=45510/56105, in_queue=101615, util=93.59% 00:40:33.189 nvme0n2: ios=3099/3395, merge=0/0, ticks=33823/68755, in_queue=102578, util=86.33% 00:40:33.189 nvme0n3: ios=4724/5120, merge=0/0, ticks=51950/50255, in_queue=102205, util=88.48% 00:40:33.189 nvme0n4: ios=6090/6144, merge=0/0, ticks=55904/43905, in_queue=99809, util=91.66% 00:40:33.189 17:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:33.189 17:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3454198 00:40:33.189 17:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:33.190 17:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:33.190 [global] 00:40:33.190 thread=1 00:40:33.190 invalidate=1 00:40:33.190 rw=read 00:40:33.190 time_based=1 00:40:33.190 runtime=10 00:40:33.190 ioengine=libaio 00:40:33.190 direct=1 00:40:33.190 bs=4096 00:40:33.190 iodepth=1 00:40:33.190 norandommap=1 00:40:33.190 numjobs=1 00:40:33.190 00:40:33.190 [job0] 00:40:33.190 filename=/dev/nvme0n1 00:40:33.190 [job1] 00:40:33.190 filename=/dev/nvme0n2 
00:40:33.190 [job2] 00:40:33.190 filename=/dev/nvme0n3 00:40:33.190 [job3] 00:40:33.190 filename=/dev/nvme0n4 00:40:33.190 Could not set queue depth (nvme0n1) 00:40:33.190 Could not set queue depth (nvme0n2) 00:40:33.190 Could not set queue depth (nvme0n3) 00:40:33.190 Could not set queue depth (nvme0n4) 00:40:33.450 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.450 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.450 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.450 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.450 fio-3.35 00:40:33.450 Starting 4 threads 00:40:35.998 17:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:36.258 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:36.258 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:40:36.258 fio: pid=3454392, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:36.258 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:36.258 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:36.258 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=376832, buflen=4096 00:40:36.259 fio: pid=3454390, err=95/file:io_u.c:1889, func=io_u error, error=Operation not 
supported 00:40:36.520 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096 00:40:36.520 fio: pid=3454385, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:36.520 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:36.520 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:36.779 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:36.779 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:36.779 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5406720, buflen=4096 00:40:36.779 fio: pid=3454386, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:36.779 00:40:36.779 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3454385: Tue Nov 5 17:04:43 2024 00:40:36.779 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(284KiB/2961msec) 00:40:36.779 slat (usec): min=24, max=214, avg=28.08, stdev=22.40 00:40:36.779 clat (usec): min=1001, max=42091, avg=41350.70, stdev=4860.46 00:40:36.779 lat (usec): min=1040, max=42116, avg=41378.82, stdev=4859.13 00:40:36.780 clat percentiles (usec): 00:40:36.780 | 1.00th=[ 1004], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:40:36.780 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:36.780 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:36.780 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:40:36.780 | 99.99th=[42206] 00:40:36.780 bw ( KiB/s): min= 96, max= 96, per=4.93%, avg=96.00, stdev= 0.00, samples=5 00:40:36.780 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:40:36.780 lat (msec) : 2=1.39%, 50=97.22% 00:40:36.780 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=1 00:40:36.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:36.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.780 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.780 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:36.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:36.780 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3454386: Tue Nov 5 17:04:43 2024 00:40:36.780 read: IOPS=415, BW=1662KiB/s (1702kB/s)(5280KiB/3177msec) 00:40:36.780 slat (usec): min=7, max=15585, avg=41.83, stdev=468.40 00:40:36.780 clat (usec): min=707, max=42082, avg=2341.56, stdev=7347.90 00:40:36.780 lat (usec): min=731, max=42107, avg=2383.40, stdev=7360.68 00:40:36.780 clat percentiles (usec): 00:40:36.780 | 1.00th=[ 799], 5.00th=[ 865], 10.00th=[ 889], 20.00th=[ 947], 00:40:36.780 | 30.00th=[ 963], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:40:36.780 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1106], 00:40:36.780 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:36.780 | 99.99th=[42206] 00:40:36.780 bw ( KiB/s): min= 96, max= 3992, per=89.55%, avg=1743.67, stdev=1830.75, samples=6 00:40:36.780 iops : min= 24, max= 998, avg=435.83, stdev=457.76, samples=6 00:40:36.780 lat (usec) : 750=0.08%, 1000=65.71% 00:40:36.780 lat (msec) : 2=30.81%, 50=3.33% 00:40:36.780 cpu : usr=0.22%, sys=1.45%, ctx=1324, majf=0, minf=2 00:40:36.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:36.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.780 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.780 issued rwts: total=1321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:36.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:36.780 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3454390: Tue Nov 5 17:04:43 2024 00:40:36.780 read: IOPS=33, BW=132KiB/s (135kB/s)(368KiB/2792msec) 00:40:36.780 slat (nsec): min=6621, max=34642, avg=25702.03, stdev=5310.06 00:40:36.780 clat (usec): min=723, max=42074, avg=30078.92, stdev=18418.53 00:40:36.780 lat (usec): min=730, max=42100, avg=30104.60, stdev=18420.34 00:40:36.780 clat percentiles (usec): 00:40:36.780 | 1.00th=[ 725], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 971], 00:40:36.780 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:40:36.780 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:36.780 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:36.780 | 99.99th=[42206] 00:40:36.780 bw ( KiB/s): min= 96, max= 296, per=6.99%, avg=136.00, stdev=89.44, samples=5 00:40:36.780 iops : min= 24, max= 74, avg=34.00, stdev=22.36, samples=5 00:40:36.780 lat (usec) : 750=3.23%, 1000=21.51% 00:40:36.780 lat (msec) : 2=3.23%, 50=70.97% 00:40:36.780 cpu : usr=0.00%, sys=0.18%, ctx=93, majf=0, minf=2 00:40:36.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:36.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.780 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.780 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:36.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:36.780 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3454392: Tue Nov 5 17:04:43 2024 00:40:36.780 read: 
IOPS=24, BW=96.1KiB/s (98.4kB/s)(252KiB/2622msec) 00:40:36.780 slat (nsec): min=26191, max=34863, avg=26756.88, stdev=1219.00 00:40:36.780 clat (usec): min=928, max=42118, avg=41240.82, stdev=5166.98 00:40:36.780 lat (usec): min=963, max=42144, avg=41267.58, stdev=5165.94 00:40:36.780 clat percentiles (usec): 00:40:36.780 | 1.00th=[ 930], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:40:36.780 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:40:36.780 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:36.780 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:36.780 | 99.99th=[42206] 00:40:36.780 bw ( KiB/s): min= 96, max= 96, per=4.93%, avg=96.00, stdev= 0.00, samples=5 00:40:36.780 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:40:36.780 lat (usec) : 1000=1.56% 00:40:36.780 lat (msec) : 50=96.88% 00:40:36.780 cpu : usr=0.15%, sys=0.00%, ctx=64, majf=0, minf=2 00:40:36.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:36.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.780 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.780 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:36.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:36.780 00:40:36.780 Run status group 0 (all jobs): 00:40:36.780 READ: bw=1946KiB/s (1993kB/s), 95.9KiB/s-1662KiB/s (98.2kB/s-1702kB/s), io=6184KiB (6332kB), run=2622-3177msec 00:40:36.780 00:40:36.780 Disk stats (read/write): 00:40:36.780 nvme0n1: ios=68/0, merge=0/0, ticks=2812/0, in_queue=2812, util=94.76% 00:40:36.780 nvme0n2: ios=1318/0, merge=0/0, ticks=3015/0, in_queue=3015, util=95.01% 00:40:36.780 nvme0n3: ios=87/0, merge=0/0, ticks=2562/0, in_queue=2562, util=95.99% 00:40:36.780 nvme0n4: ios=62/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.42% 00:40:36.780 17:04:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:36.780 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:37.040 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:37.040 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:37.300 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:37.301 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:37.301 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:37.301 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:37.562 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:37.562 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3454198 00:40:37.562 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:37.562 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:37.562 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:37.562 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:37.562 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:40:37.562 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:40:37.562 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:37.823 nvmf hotplug test: fio failed as expected 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:37.823 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:37.823 rmmod nvme_tcp 00:40:37.823 rmmod nvme_fabrics 00:40:37.823 rmmod nvme_keyring 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 3451029 ']' 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 3451029 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3451029 ']' 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3451029 00:40:38.085 17:04:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3451029 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3451029' 00:40:38.085 killing process with pid 3451029 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3451029 00:40:38.085 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3451029 00:40:38.085 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:38.085 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:40:38.085 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:40:38.085 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:38.085 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:38.085 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:38.085 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:40:40.633 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:40:40.634 
17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:40:40.634 00:40:40.634 real 0m27.635s 00:40:40.634 user 2m14.453s 00:40:40.634 sys 0m11.461s 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:40.634 ************************************ 00:40:40.634 END TEST nvmf_fio_target 00:40:40.634 ************************************ 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:40.634 ************************************ 00:40:40.634 START TEST nvmf_bdevio 00:40:40.634 ************************************ 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:40.634 * Looking for test storage... 00:40:40.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:40.634 17:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.634 --rc genhtml_branch_coverage=1 
00:40:40.634 --rc genhtml_function_coverage=1 00:40:40.634 --rc genhtml_legend=1 00:40:40.634 --rc geninfo_all_blocks=1 00:40:40.634 --rc geninfo_unexecuted_blocks=1 00:40:40.634 00:40:40.634 ' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.634 --rc genhtml_branch_coverage=1 00:40:40.634 --rc genhtml_function_coverage=1 00:40:40.634 --rc genhtml_legend=1 00:40:40.634 --rc geninfo_all_blocks=1 00:40:40.634 --rc geninfo_unexecuted_blocks=1 00:40:40.634 00:40:40.634 ' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.634 --rc genhtml_branch_coverage=1 00:40:40.634 --rc genhtml_function_coverage=1 00:40:40.634 --rc genhtml_legend=1 00:40:40.634 --rc geninfo_all_blocks=1 00:40:40.634 --rc geninfo_unexecuted_blocks=1 00:40:40.634 00:40:40.634 ' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:40.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.634 --rc genhtml_branch_coverage=1 00:40:40.634 --rc genhtml_function_coverage=1 00:40:40.634 --rc genhtml_legend=1 00:40:40.634 --rc geninfo_all_blocks=1 00:40:40.634 --rc geninfo_unexecuted_blocks=1 00:40:40.634 00:40:40.634 ' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:40.634 17:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.634 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:40.635 
17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:40.635 17:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:40:40.635 17:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 
00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:48.784 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:48.784 17:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:48.784 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:48.784 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:48.785 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.785 17:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:48.785 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@247 -- # create_target_ns 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 
00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:48.785 
17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:40:48.785 10.0.0.1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 
00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:48.785 10.0.0.2 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 
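The two `set_ip` calls above show `val_to_ip` expanding the pool values 167772161 and 167772162 into `10.0.0.1` and `10.0.0.2` via `printf '%u.%u.%u.%u\n'`. A standalone sketch of that conversion (an assumed reconstruction using shifts and masks, not the actual `nvmf/setup.sh` source, which only shows the final `printf`) is:

```shell
# Hypothetical standalone version of the val_to_ip helper seen in the trace:
# split a 32-bit integer into four octets, high byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This matches the trace's behavior of handing each initiator/target pair two consecutive addresses from the `0x0a000001` pool.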
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:48.785 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:48.786 17:04:54 
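At this point `setup_interface_pair` has finished wiring the pair: the preceding trace can be condensed into the following command sequence (an illustrative summary assumed from the xtrace output, not the script source; it requires root and the `cvl_0_0`/`cvl_0_1` devices, so it is shown for reference only):

```shell
# Sketch of the phy-transport interface-pair setup performed above:
# isolate the target-side port in a private network namespace, address
# both ends, bring the links up, and open the NVMe/TCP port 4420.
NS=nvmf_ns_spdk
ip netns add "$NS"
ip netns exec "$NS" ip link set lo up
ip link set cvl_0_1 netns "$NS"                          # target side into the ns
ip addr add 10.0.0.1/24 dev cvl_0_0                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_1  # target side
ip link set cvl_0_0 up
ip netns exec "$NS" ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

The `ping_ips` pass that follows simply verifies this plumbing in both directions (`ip netns exec "$NS" ping -c 1 10.0.0.1` from the target namespace, and `ping -c 1 10.0.0.2` from the host).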
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:48.786 17:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:48.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:48.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.639 ms 00:40:48.786 00:40:48.786 --- 10.0.0.1 ping statistics --- 00:40:48.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.786 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:48.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:48.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:40:48.786 00:40:48.786 --- 10.0.0.2 ping statistics --- 00:40:48.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.786 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:48.786 
17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:48.786 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:48.787 17:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:48.787 17:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:40:48.787 ' 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=3459454 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 3459454 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3459454 ']' 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.787 17:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:48.787 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:48.787 [2024-11-05 17:04:55.054113] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:48.787 [2024-11-05 17:04:55.055249] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:40:48.787 [2024-11-05 17:04:55.055302] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:48.787 [2024-11-05 17:04:55.156569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:48.787 [2024-11-05 17:04:55.209003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:48.787 [2024-11-05 17:04:55.209055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:48.787 [2024-11-05 17:04:55.209064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:48.787 [2024-11-05 17:04:55.209072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:48.787 [2024-11-05 17:04:55.209078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:48.787 [2024-11-05 17:04:55.211420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:48.787 [2024-11-05 17:04:55.211579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:48.787 [2024-11-05 17:04:55.211737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:48.787 [2024-11-05 17:04:55.211737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:48.787 [2024-11-05 17:04:55.288168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:48.787 [2024-11-05 17:04:55.288190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:48.787 [2024-11-05 17:04:55.288987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:48.787 [2024-11-05 17:04:55.289106] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:48.787 [2024-11-05 17:04:55.289410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:49.048 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.049 [2024-11-05 17:04:55.920888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.049 Malloc0 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.049 17:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:49.049 [2024-11-05 17:04:56.013175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:40:49.049 { 00:40:49.049 "params": { 00:40:49.049 "name": "Nvme$subsystem", 00:40:49.049 "trtype": "$TEST_TRANSPORT", 00:40:49.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:49.049 "adrfam": "ipv4", 00:40:49.049 "trsvcid": "$NVMF_PORT", 00:40:49.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:49.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:49.049 "hdgst": ${hdgst:-false}, 00:40:49.049 "ddgst": ${ddgst:-false} 00:40:49.049 }, 00:40:49.049 "method": "bdev_nvme_attach_controller" 00:40:49.049 } 00:40:49.049 EOF 00:40:49.049 )") 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:40:49.049 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:40:49.049 "params": { 00:40:49.049 "name": "Nvme1", 00:40:49.049 "trtype": "tcp", 00:40:49.049 "traddr": "10.0.0.2", 00:40:49.049 "adrfam": "ipv4", 00:40:49.049 "trsvcid": "4420", 00:40:49.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:49.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:49.049 "hdgst": false, 00:40:49.049 "ddgst": false 00:40:49.049 }, 00:40:49.049 "method": "bdev_nvme_attach_controller" 00:40:49.049 }' 00:40:49.049 [2024-11-05 17:04:56.070529] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:40:49.049 [2024-11-05 17:04:56.070596] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459781 ] 00:40:49.309 [2024-11-05 17:04:56.148430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:49.309 [2024-11-05 17:04:56.193396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:49.310 [2024-11-05 17:04:56.193514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:49.310 [2024-11-05 17:04:56.193518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.569 I/O targets: 00:40:49.569 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:49.569 00:40:49.569 00:40:49.569 CUnit - A unit testing framework for C - Version 2.1-3 00:40:49.569 http://cunit.sourceforge.net/ 00:40:49.569 00:40:49.569 00:40:49.569 Suite: bdevio tests on: Nvme1n1 00:40:49.569 Test: blockdev write read block ...passed 00:40:49.569 Test: blockdev write zeroes read block ...passed 00:40:49.569 Test: blockdev write zeroes read no split ...passed 00:40:49.569 Test: blockdev 
write zeroes read split ...passed 00:40:49.569 Test: blockdev write zeroes read split partial ...passed 00:40:49.569 Test: blockdev reset ...[2024-11-05 17:04:56.579601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:49.569 [2024-11-05 17:04:56.579665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250a970 (9): Bad file descriptor 00:40:49.569 [2024-11-05 17:04:56.627559] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:49.569 passed 00:40:49.569 Test: blockdev write read 8 blocks ...passed 00:40:49.569 Test: blockdev write read size > 128k ...passed 00:40:49.569 Test: blockdev write read invalid size ...passed 00:40:49.829 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:49.829 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:49.829 Test: blockdev write read max offset ...passed 00:40:49.829 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:49.829 Test: blockdev writev readv 8 blocks ...passed 00:40:49.829 Test: blockdev writev readv 30 x 1block ...passed 00:40:49.829 Test: blockdev writev readv block ...passed 00:40:49.829 Test: blockdev writev readv size > 128k ...passed 00:40:49.829 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:49.829 Test: blockdev comparev and writev ...[2024-11-05 17:04:56.810775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.829 [2024-11-05 17:04:56.810801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:49.829 [2024-11-05 17:04:56.810813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.829 
[2024-11-05 17:04:56.810819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:49.829 [2024-11-05 17:04:56.811336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.829 [2024-11-05 17:04:56.811345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:49.829 [2024-11-05 17:04:56.811354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.829 [2024-11-05 17:04:56.811360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:49.829 [2024-11-05 17:04:56.811898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.829 [2024-11-05 17:04:56.811907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:49.829 [2024-11-05 17:04:56.811917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.829 [2024-11-05 17:04:56.811922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:49.829 [2024-11-05 17:04:56.812473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.829 [2024-11-05 17:04:56.812486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:49.829 [2024-11-05 17:04:56.812495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:49.829 [2024-11-05 17:04:56.812501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:49.829 passed 00:40:50.088 Test: blockdev nvme passthru rw ...passed 00:40:50.088 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:04:56.897604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:50.088 [2024-11-05 17:04:56.897617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:50.088 [2024-11-05 17:04:56.897964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:50.088 [2024-11-05 17:04:56.897974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:50.088 [2024-11-05 17:04:56.898283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:50.088 [2024-11-05 17:04:56.898291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:50.088 [2024-11-05 17:04:56.898608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:50.088 [2024-11-05 17:04:56.898617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:50.088 passed 00:40:50.088 Test: blockdev nvme admin passthru ...passed 00:40:50.088 Test: blockdev copy ...passed 00:40:50.088 00:40:50.089 Run Summary: Type Total Ran Passed Failed Inactive 00:40:50.089 suites 1 1 n/a 0 0 00:40:50.089 tests 23 23 23 0 0 00:40:50.089 asserts 152 152 152 0 n/a 00:40:50.089 00:40:50.089 Elapsed time = 1.008 
seconds 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:50.089 rmmod nvme_tcp 00:40:50.089 rmmod nvme_fabrics 00:40:50.089 rmmod nvme_keyring 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@336 -- # '[' -n 3459454 ']' 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 3459454 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3459454 ']' 00:40:50.089 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3459454 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3459454 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3459454' 00:40:50.349 killing process with pid 3459454 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3459454 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3459454 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # 
remove_target_ns 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:50.349 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:52.894 17:04:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:52.894 00:40:52.894 real 0m12.243s 00:40:52.894 user 0m9.291s 00:40:52.894 sys 0m6.531s 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:40:52.894 ************************************ 00:40:52.894 END TEST nvmf_bdevio 00:40:52.894 ************************************ 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:52.894 00:40:52.894 real 4m58.992s 00:40:52.894 user 10m15.653s 00:40:52.894 sys 2m2.247s 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:52.894 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:52.895 ************************************ 00:40:52.895 END TEST nvmf_target_core_interrupt_mode 00:40:52.895 ************************************ 00:40:52.895 17:04:59 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:52.895 17:04:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:52.895 17:04:59 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:52.895 17:04:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:52.895 ************************************ 00:40:52.895 START TEST nvmf_interrupt 00:40:52.895 ************************************ 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:52.895 * Looking for test storage... 
00:40:52.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:52.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.895 --rc genhtml_branch_coverage=1 00:40:52.895 --rc genhtml_function_coverage=1 00:40:52.895 --rc genhtml_legend=1 00:40:52.895 --rc geninfo_all_blocks=1 00:40:52.895 --rc geninfo_unexecuted_blocks=1 00:40:52.895 00:40:52.895 ' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:52.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.895 --rc genhtml_branch_coverage=1 00:40:52.895 --rc 
genhtml_function_coverage=1 00:40:52.895 --rc genhtml_legend=1 00:40:52.895 --rc geninfo_all_blocks=1 00:40:52.895 --rc geninfo_unexecuted_blocks=1 00:40:52.895 00:40:52.895 ' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:52.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.895 --rc genhtml_branch_coverage=1 00:40:52.895 --rc genhtml_function_coverage=1 00:40:52.895 --rc genhtml_legend=1 00:40:52.895 --rc geninfo_all_blocks=1 00:40:52.895 --rc geninfo_unexecuted_blocks=1 00:40:52.895 00:40:52.895 ' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:52.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.895 --rc genhtml_branch_coverage=1 00:40:52.895 --rc genhtml_function_coverage=1 00:40:52.895 --rc genhtml_legend=1 00:40:52.895 --rc geninfo_all_blocks=1 00:40:52.895 --rc geninfo_unexecuted_blocks=1 00:40:52.895 00:40:52.895 ' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:52.895 
17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:52.895 
17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:52.895 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:52.896 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:40:52.896 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:40:52.896 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:40:52.896 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:52.896 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:40:52.896 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:40:52.896 17:04:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # xtrace_disable 00:40:52.896 17:04:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # pci_devs=() 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # local -a pci_devs 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # pci_net_devs=() 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # pci_drivers=() 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # local -A pci_drivers 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # net_devs=() 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # local -ga net_devs 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # e810=() 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # local -ga e810 00:41:01.036 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # x722=() 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # local -ga x722 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # mlx=() 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # local -ga mlx 00:41:01.037 
17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:01.037 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:01.037 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:01.037 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:01.037 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # is_hw=yes 
00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@247 -- # create_target_ns 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 
ip=167772161 in_ns= 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:41:01.037 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:41:01.038 10.0.0.1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:41:01.038 10.0.0.2 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:41:01.038 17:05:06 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:41:01.038 17:05:06 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:41:01.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:01.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.557 ms 00:41:01.038 00:41:01.038 --- 10.0.0.1 ping statistics --- 00:41:01.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.038 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:41:01.038 17:05:07 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:41:01.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:01.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:41:01.038 00:41:01.038 --- 10.0.0.2 ping statistics --- 00:41:01.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.038 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@270 -- # return 0 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local 
dev=initiator0 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:41:01.038 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # return 1 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev= 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@160 -- # return 0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:41:01.039 17:05:07 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target1 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # return 1 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev= 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@160 -- # return 0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:41:01.039 ' 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp 
00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=3464157 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 3464157 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3464157 ']' 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:01.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.039 [2024-11-05 17:05:07.189333] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:01.039 [2024-11-05 17:05:07.190429] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:41:01.039 [2024-11-05 17:05:07.190477] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:01.039 [2024-11-05 17:05:07.268638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:01.039 [2024-11-05 17:05:07.304723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:01.039 [2024-11-05 17:05:07.304760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:01.039 [2024-11-05 17:05:07.304768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:01.039 [2024-11-05 17:05:07.304775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:01.039 [2024-11-05 17:05:07.304781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:01.039 [2024-11-05 17:05:07.305836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:01.039 [2024-11-05 17:05:07.305838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.039 [2024-11-05 17:05:07.360197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:01.039 [2024-11-05 17:05:07.360697] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:01.039 [2024-11-05 17:05:07.361060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:01.039 5000+0 records in 00:41:01.039 5000+0 records out 00:41:01.039 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184904 s, 554 MB/s 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.039 AIO0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.039 17:05:07 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.039 [2024-11-05 17:05:07.510405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.039 [2024-11-05 17:05:07.551073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3464157 0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3464157 0 idle 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3464157 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:01.039 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3464157 -w 256 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3464157 root 20 0 128.2g 47232 34560 S 0.0 0.0 0:00.22 reactor_0' 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3464157 root 20 0 128.2g 47232 34560 S 0.0 0.0 0:00.22 reactor_0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:01.040 
17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3464157 1 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3464157 1 idle 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3464157 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3464157 -w 256 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3464161 root 20 0 128.2g 47232 34560 S 0.0 0.0 0:00.00 reactor_1' 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3464161 root 20 0 128.2g 
47232 34560 S 0.0 0.0 0:00.00 reactor_1 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3464305 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3464157 0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3464157 0 busy 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3464157 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3464157 -w 256 00:41:01.040 17:05:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3464157 root 20 0 128.2g 47232 34560 R 86.7 0.0 0:00.35 reactor_0' 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3464157 root 20 0 128.2g 47232 34560 R 86.7 0.0 0:00.35 reactor_0 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:01.301 17:05:08 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3464157 1 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3464157 1 busy 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3464157 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3464157 -w 256 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3464161 root 20 0 128.2g 47232 34560 R 93.3 0.0 0:00.25 reactor_1' 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3464161 root 20 0 128.2g 47232 34560 R 93.3 0.0 0:00.25 reactor_1 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:01.301 17:05:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3464305 00:41:11.299 Initializing NVMe Controllers 00:41:11.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:11.299 Controller IO queue size 256, less than required. 00:41:11.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:11.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:11.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:11.299 Initialization complete. Launching workers. 
00:41:11.299 ======================================================== 00:41:11.299 Latency(us) 00:41:11.299 Device Information : IOPS MiB/s Average min max 00:41:11.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16613.40 64.90 15419.55 2412.37 55793.18 00:41:11.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19349.30 75.58 13232.81 7569.23 50966.34 00:41:11.299 ======================================================== 00:41:11.299 Total : 35962.70 140.48 14243.00 2412.37 55793.18 00:41:11.299 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3464157 0 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3464157 0 idle 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3464157 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3464157 -w 256 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3464157 root 20 0 128.2g 47232 34560 S 0.0 0.0 0:20.21 reactor_0' 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3464157 root 20 0 128.2g 47232 34560 S 0.0 0.0 0:20.21 reactor_0 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3464157 1 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3464157 1 idle 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3464157 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:11.299 17:05:18 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3464157 -w 256 00:41:11.299 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3464161 root 20 0 128.2g 47232 34560 S 0.0 0.0 0:10.00 reactor_1' 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3464161 root 20 0 128.2g 47232 34560 S 0.0 0.0 0:10.00 reactor_1 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:11.560 17:05:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:12.130 17:05:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:41:12.130 17:05:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:41:12.130 17:05:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:41:12.130 17:05:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:41:12.130 17:05:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3464157 0 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3464157 0 idle 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3464157 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3464157 -w 256 00:41:14.041 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3464157 root 20 0 128.2g 81792 34560 S 0.0 0.1 0:20.46 reactor_0' 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3464157 root 20 0 128.2g 81792 34560 S 0.0 0.1 0:20.46 reactor_0 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3464157 1 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3464157 1 idle 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3464157 00:41:14.302 
17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3464157 -w 256 00:41:14.302 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3464161 root 20 0 128.2g 81792 34560 S 0.0 0.1 0:10.14 reactor_1' 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3464161 root 20 0 128.2g 81792 34560 S 0.0 0.1 0:10.14 reactor_1 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:14.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:41:14.562 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:41:14.823 rmmod nvme_tcp 00:41:14.823 rmmod nvme_fabrics 00:41:14.823 rmmod nvme_keyring 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:41:14.823 17:05:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 3464157 ']' 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 3464157 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3464157 ']' 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3464157 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3464157 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3464157' 00:41:14.823 killing process with pid 3464157 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3464157 00:41:14.823 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3464157 00:41:15.083 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:41:15.083 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:41:15.083 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@254 -- # local dev 00:41:15.083 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # remove_target_ns 00:41:15.083 17:05:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:15.083 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> 
/dev/null' 00:41:15.083 17:05:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # delete_main_bridge 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # return 0 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:41:16.991 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:41:16.992 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:41:16.992 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@274 -- # iptr 00:41:16.992 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-save 00:41:16.992 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:41:16.992 17:05:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-restore 00:41:16.992 00:41:16.992 real 0m24.403s 00:41:16.992 user 0m40.193s 00:41:16.992 sys 0m9.289s 00:41:16.992 17:05:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:16.992 17:05:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:16.992 ************************************ 00:41:16.992 END TEST nvmf_interrupt 00:41:16.992 ************************************ 00:41:16.992 00:41:16.992 real 29m58.424s 00:41:16.992 user 61m11.757s 00:41:16.992 sys 10m3.247s 00:41:16.992 17:05:24 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:16.992 17:05:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:16.992 ************************************ 00:41:16.992 END TEST nvmf_tcp 00:41:16.992 ************************************ 00:41:17.252 17:05:24 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:41:17.252 17:05:24 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:17.252 17:05:24 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:41:17.252 17:05:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:17.252 17:05:24 -- common/autotest_common.sh@10 -- # set +x 00:41:17.252 ************************************ 00:41:17.252 START TEST spdkcli_nvmf_tcp 00:41:17.252 ************************************ 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:17.252 * Looking for test storage... 00:41:17.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:17.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.252 --rc genhtml_branch_coverage=1 00:41:17.252 --rc genhtml_function_coverage=1 00:41:17.252 --rc genhtml_legend=1 00:41:17.252 --rc geninfo_all_blocks=1 00:41:17.252 --rc geninfo_unexecuted_blocks=1 00:41:17.252 00:41:17.252 ' 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:17.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.252 --rc genhtml_branch_coverage=1 00:41:17.252 --rc genhtml_function_coverage=1 00:41:17.252 --rc genhtml_legend=1 00:41:17.252 --rc geninfo_all_blocks=1 00:41:17.252 --rc 
geninfo_unexecuted_blocks=1 00:41:17.252 00:41:17.252 ' 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:17.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.252 --rc genhtml_branch_coverage=1 00:41:17.252 --rc genhtml_function_coverage=1 00:41:17.252 --rc genhtml_legend=1 00:41:17.252 --rc geninfo_all_blocks=1 00:41:17.252 --rc geninfo_unexecuted_blocks=1 00:41:17.252 00:41:17.252 ' 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:17.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:17.252 --rc genhtml_branch_coverage=1 00:41:17.252 --rc genhtml_function_coverage=1 00:41:17.252 --rc genhtml_legend=1 00:41:17.252 --rc geninfo_all_blocks=1 00:41:17.252 --rc geninfo_unexecuted_blocks=1 00:41:17.252 00:41:17.252 ' 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:17.252 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:17.513 
17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:41:17.513 
17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:41:17.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3467620 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3467620 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 3467620 ']' 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:17.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:17.513 17:05:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:17.513 [2024-11-05 17:05:24.415997] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:41:17.513 [2024-11-05 17:05:24.416055] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467620 ] 00:41:17.513 [2024-11-05 17:05:24.488479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:17.513 [2024-11-05 17:05:24.525865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:17.513 [2024-11-05 17:05:24.525989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.144 17:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:18.144 17:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:41:18.144 17:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:18.144 17:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:18.144 17:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.447 17:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:18.447 17:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:18.447 17:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:18.447 
17:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:18.447 17:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.447 17:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:18.447 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:18.447 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:18.447 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:18.447 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:18.447 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:18.447 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:18.447 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:18.447 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:18.447 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:18.447 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:18.447 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:18.447 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:18.447 ' 00:41:20.989 [2024-11-05 17:05:27.737386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:21.931 [2024-11-05 17:05:28.945322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:24.472 [2024-11-05 17:05:31.163951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:41:26.385 [2024-11-05 17:05:33.069604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:27.767 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:27.767 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:27.767 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:27.767 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:27.767 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:27.767 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:27.767 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:27.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:27.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:27.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:27.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:27.767 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:27.767 17:05:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:27.767 17:05:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:27.767 
17:05:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:27.767 17:05:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:27.767 17:05:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:27.767 17:05:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:27.767 17:05:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:27.767 17:05:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:28.028 17:05:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:28.289 17:05:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:28.289 17:05:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:28.289 17:05:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:28.289 17:05:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:28.289 17:05:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:28.289 17:05:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:28.289 17:05:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:28.289 17:05:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:28.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:28.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:28.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:28.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:28.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:28.289 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:28.289 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:28.289 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:28.289 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:28.289 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:28.289 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:28.289 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:28.289 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:28.289 ' 00:41:33.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:33.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:33.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:33.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:33.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:33.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:33.573 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:33.573 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:33.573 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:33.573 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:33.573 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:33.573 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:33.573 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:33.573 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:33.573 17:05:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:33.573 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:33.573 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3467620 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3467620 ']' 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3467620 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3467620 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3467620' 00:41:33.834 killing process with pid 3467620 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3467620 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3467620 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 
-- # cleanup 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3467620 ']' 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3467620 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3467620 ']' 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3467620 00:41:33.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3467620) - No such process 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3467620 is not found' 00:41:33.834 Process with pid 3467620 is not found 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:33.834 00:41:33.834 real 0m16.723s 00:41:33.834 user 0m35.379s 00:41:33.834 sys 0m0.728s 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:33.834 17:05:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.834 ************************************ 00:41:33.834 END TEST spdkcli_nvmf_tcp 00:41:33.834 ************************************ 00:41:33.834 17:05:40 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:33.834 17:05:40 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:41:33.834 17:05:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:33.834 17:05:40 -- 
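The cleanup trace above shows the `killprocess` helper probing the target pid with `kill -0`, then reporting "Process with pid ... is not found" once the process is gone. A condensed sketch of that pattern (the real helper in `autotest_common.sh` also checks the process name via `ps` and escalates to `kill -9`; this reconstruction keeps only the probe, the kill, and the fallback message):

```shell
# Sketch of the killprocess pattern from autotest_common.sh (simplified
# reconstruction, not the upstream code).
killprocess() {
  local pid=$1
  if kill -0 "$pid" 2>/dev/null; then     # probe: does the pid exist?
    echo "killing process with pid $pid"
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true       # reap so the pid truly disappears
  else
    echo "Process with pid $pid is not found"
  fi
}

sleep 30 & demo_pid=$!
killprocess "$demo_pid"   # terminates the background sleep
killprocess "$demo_pid"   # already reaped: prints the not-found message
```

Calling it twice, as the log does (once from `nvmf.sh@90` and again from the `cleanup` trap), is why the second invocation lands in the "No such process" branch.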
common/autotest_common.sh@10 -- # set +x 00:41:34.095 ************************************ 00:41:34.095 START TEST nvmf_identify_passthru 00:41:34.095 ************************************ 00:41:34.095 17:05:40 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:34.095 * Looking for test storage... 00:41:34.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:41:34.095 
17:05:41 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.095 --rc genhtml_branch_coverage=1 00:41:34.095 --rc genhtml_function_coverage=1 00:41:34.095 --rc genhtml_legend=1 00:41:34.095 --rc geninfo_all_blocks=1 00:41:34.095 --rc geninfo_unexecuted_blocks=1 00:41:34.095 00:41:34.095 ' 
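The `lt 1.15 2` gate traced above (via `cmp_versions` in `scripts/common.sh`) decides whether the installed lcov is too old for the coverage flags. A minimal stand-in for that comparison using `sort -V` (a behavior sketch, not the upstream field-by-field implementation):

```shell
# Hedged sketch: version "less than" via GNU sort -V, matching the
# observable behavior of the lt/cmp_versions helpers in the log.
lt() {  # true if $1 sorts strictly before $2 as a version string
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 1.15 2      && echo "1.15 is older than 2"
lt 2.39.2 2.40 && echo "2.39.2 is older than 2.40"
```

Numeric per-field comparison matters here: plain string ordering would call `1.15` newer than `1.2`, which is exactly the trap `cmp_versions` avoids.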
00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.095 --rc genhtml_branch_coverage=1 00:41:34.095 --rc genhtml_function_coverage=1 00:41:34.095 --rc genhtml_legend=1 00:41:34.095 --rc geninfo_all_blocks=1 00:41:34.095 --rc geninfo_unexecuted_blocks=1 00:41:34.095 00:41:34.095 ' 00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.095 --rc genhtml_branch_coverage=1 00:41:34.095 --rc genhtml_function_coverage=1 00:41:34.095 --rc genhtml_legend=1 00:41:34.095 --rc geninfo_all_blocks=1 00:41:34.095 --rc geninfo_unexecuted_blocks=1 00:41:34.095 00:41:34.095 ' 00:41:34.095 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.095 --rc genhtml_branch_coverage=1 00:41:34.095 --rc genhtml_function_coverage=1 00:41:34.095 --rc genhtml_legend=1 00:41:34.095 --rc geninfo_all_blocks=1 00:41:34.095 --rc geninfo_unexecuted_blocks=1 00:41:34.095 00:41:34.095 ' 00:41:34.095 17:05:41 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@13 -- # 
NVMF_TRANSPORT_OPTS= 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:34.095 17:05:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.095 17:05:41 nvmf_identify_passthru -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.095 17:05:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.095 17:05:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:34.095 17:05:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:41:34.095 17:05:41 
nvmf_identify_passthru -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:41:34.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:41:34.095 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:41:34.095 17:05:41 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:34.095 17:05:41 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:34.095 17:05:41 nvmf_identify_passthru -- paths/export.sh@2 -- # 
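The `[: : integer expression expected` error above comes from `test`'s `-eq` being handed an empty string (`'[' '' -eq 1 ']'` at `nvmf/common.sh: line 31`). A sketch of the usual guard, using a hypothetical variable name rather than the actual one from `common.sh`:

```shell
# maybe_flag stands in for whatever variable expanded to '' in the log;
# defaulting it with ${var:-0} keeps -eq from seeing a non-integer.
maybe_flag=""
if [ "${maybe_flag:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag not set"
fi
```

Note the error is non-fatal in the log because `[` simply returns nonzero, so the script falls through as if the test were false.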
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.096 17:05:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.096 17:05:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.096 17:05:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:34.096 17:05:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.096 17:05:41 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:34.096 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:41:34.096 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:41:34.096 17:05:41 nvmf_identify_passthru -- nvmf/common.sh@125 -- # xtrace_disable 00:41:34.096 17:05:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:42.235 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:42.235 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@131 -- # pci_devs=() 00:41:42.235 17:05:47 nvmf_identify_passthru -- 
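The PATH values above grow because `paths/export.sh` re-prepends the same Go/protoc/golangci directories every time it is sourced. A sketch of collapsing such a PATH to unique entries in first-seen order (`dedupe_path` is a hypothetical helper for illustration, not part of SPDK):

```shell
# Split on ':' with awk, keep only the first occurrence of each entry,
# and strip the trailing ':' the joiner leaves behind.
dedupe_path() {
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin"
# /opt/go/bin:/usr/bin:/usr/local/bin
```

Keeping first-seen order preserves lookup precedence, which is what matters when the duplicates were all prepends of the same directories.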
nvmf/common.sh@131 -- # local -a pci_devs 00:41:42.235 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@132 -- # pci_net_devs=() 00:41:42.235 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:41:42.235 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@133 -- # pci_drivers=() 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@133 -- # local -A pci_drivers 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@135 -- # net_devs=() 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@135 -- # local -ga net_devs 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@136 -- # e810=() 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@136 -- # local -ga e810 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@137 -- # x722=() 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@137 -- # local -ga x722 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@138 -- # mlx=() 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@138 -- # local -ga mlx 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 
00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:42.236 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:42.236 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:42.236 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:42.236 
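The "Found net devices under ..." lines come from globbing each PCI device's sysfs `net/` directory into a list and stripping the entries to basenames (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` then `"${pci_net_devs[@]##*/}"`). The same pattern, recreated against a temp directory instead of real `/sys`:

```shell
# Stand-in for /sys/bus/pci/devices/<pci>/net/ with the interface names
# from the log; set -- plays the role of the pci_net_devs array.
tmp=$(mktemp -d)
mkdir -p "$tmp/net/cvl_0_0" "$tmp/net/cvl_0_1"

set -- "$tmp/net/"*                       # glob: full paths, sorted
for dev in "$@"; do
  echo "Found net device: ${dev##*/}"     # ##*/ strips to the basename
done

rm -rf "$tmp"
```

An empty `net/` directory would leave the unexpanded glob as the sole entry, which is why the real script follows up with existence checks before counting devices.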
17:05:47 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:42.236 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@262 -- # is_hw=yes 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@247 -- # create_target_ns 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@139 -- # set_up lo 
NVMF_TARGET_NS_CMD 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:42.236 17:05:47 
nvmf_identify_passthru -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 
00:41:42.236 10.0.0.1 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:41:42.236 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:41:42.237 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:41:42.237 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:41:42.237 17:05:47 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:41:42.237 10.0.0.2 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:41:42.237 17:05:48 
nvmf_identify_passthru -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 
00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:41:42.237 17:05:48 nvmf_identify_passthru -- 
nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:41:42.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:42.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.606 ms 00:41:42.237 00:41:42.237 --- 10.0.0.1 ping statistics --- 00:41:42.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.237 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:41:42.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:42.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:41:42.237 00:41:42.237 --- 10.0.0.2 ping statistics --- 00:41:42.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.237 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@270 -- # return 0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # return 1 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev= 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@160 
-- # return 0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:42.237 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target1 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # return 1 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev= 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@160 -- # return 0 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:41:42.238 ' 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:42.238 17:05:48 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:41:42.238 17:05:48 nvmf_identify_passthru -- 
nvmf/common.sh@321 -- # modprobe nvme-tcp 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:41:42.238 17:05:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:41:42.238 17:05:48 
nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:42.238 17:05:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:42.499 17:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:41:42.499 17:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:42.499 17:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:42.499 17:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3474664 00:41:42.499 17:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:42.499 17:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3474664 
00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3474664 ']' 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:42.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:42.499 17:05:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:42.499 17:05:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:42.499 [2024-11-05 17:05:49.423738] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:41:42.499 [2024-11-05 17:05:49.423801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:42.499 [2024-11-05 17:05:49.500332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:42.499 [2024-11-05 17:05:49.537280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:42.499 [2024-11-05 17:05:49.537315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:41:42.499 [2024-11-05 17:05:49.537323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:42.499 [2024-11-05 17:05:49.537330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:42.499 [2024-11-05 17:05:49.537335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:42.499 [2024-11-05 17:05:49.539084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:42.499 [2024-11-05 17:05:49.539197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:42.499 [2024-11-05 17:05:49.539355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.499 [2024-11-05 17:05:49.539355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:43.451 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:43.451 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:41:43.451 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:43.451 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.451 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.451 INFO: Log level set to 20 00:41:43.451 INFO: Requests: 00:41:43.451 { 00:41:43.451 "jsonrpc": "2.0", 00:41:43.451 "method": "nvmf_set_config", 00:41:43.452 "id": 1, 00:41:43.452 "params": { 00:41:43.452 "admin_cmd_passthru": { 00:41:43.452 "identify_ctrlr": true 00:41:43.452 } 00:41:43.452 } 00:41:43.452 } 00:41:43.452 00:41:43.452 INFO: response: 00:41:43.452 { 00:41:43.452 "jsonrpc": "2.0", 00:41:43.452 "id": 1, 00:41:43.452 "result": true 00:41:43.452 } 00:41:43.452 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.452 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 
-- # rpc_cmd -v framework_start_init 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.452 INFO: Setting log level to 20 00:41:43.452 INFO: Setting log level to 20 00:41:43.452 INFO: Log level set to 20 00:41:43.452 INFO: Log level set to 20 00:41:43.452 INFO: Requests: 00:41:43.452 { 00:41:43.452 "jsonrpc": "2.0", 00:41:43.452 "method": "framework_start_init", 00:41:43.452 "id": 1 00:41:43.452 } 00:41:43.452 00:41:43.452 INFO: Requests: 00:41:43.452 { 00:41:43.452 "jsonrpc": "2.0", 00:41:43.452 "method": "framework_start_init", 00:41:43.452 "id": 1 00:41:43.452 } 00:41:43.452 00:41:43.452 [2024-11-05 17:05:50.291223] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:43.452 INFO: response: 00:41:43.452 { 00:41:43.452 "jsonrpc": "2.0", 00:41:43.452 "id": 1, 00:41:43.452 "result": true 00:41:43.452 } 00:41:43.452 00:41:43.452 INFO: response: 00:41:43.452 { 00:41:43.452 "jsonrpc": "2.0", 00:41:43.452 "id": 1, 00:41:43.452 "result": true 00:41:43.452 } 00:41:43.452 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.452 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.452 INFO: Setting log level to 40 00:41:43.452 INFO: Setting log level to 40 00:41:43.452 INFO: Setting log level to 40 00:41:43.452 [2024-11-05 17:05:50.304549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.452 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # 
timing_exit start_nvmf_tgt 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.452 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.452 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.712 Nvme0n1 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.712 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.712 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.712 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.712 [2024-11-05 17:05:50.701225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: 
*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.712 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:43.712 [ 00:41:43.712 { 00:41:43.712 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:43.712 "subtype": "Discovery", 00:41:43.712 "listen_addresses": [], 00:41:43.712 "allow_any_host": true, 00:41:43.712 "hosts": [] 00:41:43.712 }, 00:41:43.712 { 00:41:43.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:43.712 "subtype": "NVMe", 00:41:43.712 "listen_addresses": [ 00:41:43.712 { 00:41:43.712 "trtype": "TCP", 00:41:43.712 "adrfam": "IPv4", 00:41:43.712 "traddr": "10.0.0.2", 00:41:43.712 "trsvcid": "4420" 00:41:43.712 } 00:41:43.712 ], 00:41:43.712 "allow_any_host": true, 00:41:43.712 "hosts": [], 00:41:43.712 "serial_number": "SPDK00000000000001", 00:41:43.712 "model_number": "SPDK bdev Controller", 00:41:43.712 "max_namespaces": 1, 00:41:43.712 "min_cntlid": 1, 00:41:43.712 "max_cntlid": 65519, 00:41:43.712 "namespaces": [ 00:41:43.712 { 00:41:43.712 "nsid": 1, 00:41:43.712 "bdev_name": "Nvme0n1", 00:41:43.712 "name": "Nvme0n1", 00:41:43.712 "nguid": "36344730526054870025384500000044", 00:41:43.712 "uuid": "36344730-5260-5487-0025-384500000044" 00:41:43.712 } 00:41:43.712 ] 00:41:43.712 } 00:41:43.712 ] 00:41:43.712 17:05:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.712 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:43.712 17:05:50 nvmf_identify_passthru -- 
target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:43.712 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:43.972 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:41:43.972 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:43.972 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:43.972 17:05:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:44.232 17:05:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:41:44.232 17:05:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:41:44.232 17:05:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:41:44.232 17:05:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:44.232 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.232 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:44.232 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.232 17:05:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:44.232 17:05:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:41:44.232 17:05:51 
nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:41:44.232 rmmod nvme_tcp 00:41:44.232 rmmod nvme_fabrics 00:41:44.232 rmmod nvme_keyring 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 3474664 ']' 00:41:44.232 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 3474664 00:41:44.232 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3474664 ']' 00:41:44.232 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3474664 00:41:44.232 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:41:44.232 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:44.232 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3474664 00:41:44.492 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:44.492 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:44.492 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3474664' 00:41:44.492 killing process with pid 3474664 00:41:44.492 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3474664 00:41:44.492 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3474664 00:41:44.492 17:05:51 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:41:44.492 17:05:51 nvmf_identify_passthru 
-- nvmf/common.sh@342 -- # nvmf_fini 00:41:44.492 17:05:51 nvmf_identify_passthru -- nvmf/setup.sh@254 -- # local dev 00:41:44.492 17:05:51 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # remove_target_ns 00:41:44.492 17:05:51 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:44.492 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:41:44.492 17:05:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # delete_main_bridge 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # return 0 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:41:47.035 17:05:53 
nvmf_identify_passthru -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/setup.sh@274 -- # iptr 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-save 00:41:47.035 17:05:53 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-restore 00:41:47.035 00:41:47.035 real 0m12.704s 00:41:47.035 user 0m10.237s 00:41:47.035 sys 0m6.269s 00:41:47.035 17:05:53 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:47.035 17:05:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:47.035 ************************************ 00:41:47.035 END TEST nvmf_identify_passthru 00:41:47.035 ************************************ 00:41:47.035 17:05:53 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:47.035 17:05:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:47.035 17:05:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:47.035 17:05:53 -- common/autotest_common.sh@10 -- # set +x 00:41:47.035 ************************************ 00:41:47.035 START TEST nvmf_dif 00:41:47.035 ************************************ 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:47.035 * Looking for test storage... 00:41:47.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:47.035 17:05:53 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:47.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:47.035 --rc genhtml_branch_coverage=1 00:41:47.035 --rc genhtml_function_coverage=1 00:41:47.035 --rc genhtml_legend=1 00:41:47.035 --rc geninfo_all_blocks=1 00:41:47.035 --rc geninfo_unexecuted_blocks=1 00:41:47.035 00:41:47.035 ' 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:47.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:47.035 --rc genhtml_branch_coverage=1 00:41:47.035 --rc genhtml_function_coverage=1 00:41:47.035 --rc genhtml_legend=1 00:41:47.035 --rc geninfo_all_blocks=1 00:41:47.035 --rc geninfo_unexecuted_blocks=1 00:41:47.035 00:41:47.035 ' 00:41:47.035 17:05:53 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:41:47.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:47.035 --rc genhtml_branch_coverage=1 00:41:47.035 --rc genhtml_function_coverage=1 00:41:47.035 --rc genhtml_legend=1 00:41:47.036 --rc geninfo_all_blocks=1 00:41:47.036 --rc geninfo_unexecuted_blocks=1 00:41:47.036 00:41:47.036 ' 00:41:47.036 17:05:53 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:47.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:47.036 --rc genhtml_branch_coverage=1 00:41:47.036 --rc genhtml_function_coverage=1 00:41:47.036 --rc genhtml_legend=1 00:41:47.036 --rc geninfo_all_blocks=1 00:41:47.036 --rc geninfo_unexecuted_blocks=1 00:41:47.036 00:41:47.036 ' 00:41:47.036 17:05:53 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:47.036 17:05:53 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:47.036 17:05:53 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:47.036 17:05:53 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:47.036 17:05:53 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:47.036 17:05:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.036 17:05:53 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.036 17:05:53 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.036 17:05:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:47.036 17:05:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:41:47.036 17:05:53 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:41:47.036 17:05:53 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:47.036 17:05:53 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:41:47.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:41:47.036 17:05:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:47.036 17:05:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:47.036 17:05:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:47.036 17:05:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:47.036 17:05:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:41:47.036 17:05:53 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:47.036 17:05:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:41:47.036 17:05:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:41:47.036 17:05:53 nvmf_dif -- nvmf/common.sh@125 -- # xtrace_disable 00:41:47.036 17:05:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@131 -- # pci_devs=() 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@131 -- # local -a pci_devs 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@132 -- # pci_net_devs=() 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:41:55.178 
17:06:00 nvmf_dif -- nvmf/common.sh@133 -- # pci_drivers=() 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@133 -- # local -A pci_drivers 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@135 -- # net_devs=() 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@135 -- # local -ga net_devs 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@136 -- # e810=() 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@136 -- # local -ga e810 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@137 -- # x722=() 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@137 -- # local -ga x722 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@138 -- # mlx=() 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@138 -- # local -ga mlx 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@162 -- # 
pci_devs+=("${e810[@]}") 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:55.178 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:55.178 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:55.178 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:55.178 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@262 -- # is_hw=yes 00:41:55.178 17:06:00 nvmf_dif -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:41:55.178 17:06:00 nvmf_dif -- 
nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@247 -- # create_target_ns 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:41:55.179 
17:06:00 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 
00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:41:55.179 10.0.0.1 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:41:55.179 10.0.0.2 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:41:55.179 17:06:00 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:41:55.179 17:06:01 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 1 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:55.179 17:06:01 nvmf_dif -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:41:55.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:55.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.589 ms 00:41:55.179 00:41:55.179 --- 10.0.0.1 ping statistics --- 00:41:55.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:55.179 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:41:55.179 17:06:01 nvmf_dif -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:41:55.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:55.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:41:55.179 00:41:55.179 --- 10.0.0.2 ping statistics --- 00:41:55.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:55.179 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:55.179 17:06:01 nvmf_dif -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:55.179 17:06:01 nvmf_dif -- nvmf/common.sh@270 -- # return 0 00:41:55.180 17:06:01 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:41:55.180 17:06:01 nvmf_dif -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:57.726 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:41:57.726 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 
0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:41:57.726 17:06:04 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:41:57.726 17:06:04 nvmf_dif -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:41:57.726 17:06:04 nvmf_dif -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:41:57.726 17:06:04 nvmf_dif -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:41:57.726 17:06:04 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:57.726 17:06:04 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:57.726 17:06:04 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:57.726 17:06:04 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:57.727 17:06:04 
nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # return 1 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@159 -- # dev= 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@160 -- # return 0 00:41:57.727 17:06:04 nvmf_dif -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@166 -- 
# echo 10.0.0.2 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@100 -- # return 1 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@159 -- # dev= 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@160 -- # return 0 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:41:57.988 ' 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:41:57.988 17:06:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' 
--dif-insert-or-strip' 00:41:57.988 17:06:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:41:57.988 17:06:04 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:57.988 17:06:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=3480797 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 3480797 00:41:57.988 17:06:04 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:57.988 17:06:04 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3480797 ']' 00:41:57.988 17:06:04 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:57.988 17:06:04 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:57.988 17:06:04 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:57.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:57.988 17:06:04 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:57.988 17:06:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:57.988 [2024-11-05 17:06:04.936490] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:41:57.988 [2024-11-05 17:06:04.936553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:57.988 [2024-11-05 17:06:05.021283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.248 [2024-11-05 17:06:05.061972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:41:58.248 [2024-11-05 17:06:05.062009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:58.248 [2024-11-05 17:06:05.062017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:58.248 [2024-11-05 17:06:05.062024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:58.248 [2024-11-05 17:06:05.062030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:58.248 [2024-11-05 17:06:05.062614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:41:58.821 17:06:05 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:58.821 17:06:05 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:58.821 17:06:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:58.821 17:06:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:58.821 [2024-11-05 17:06:05.788086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:58.821 17:06:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:41:58.821 17:06:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:58.821 ************************************ 00:41:58.821 START TEST fio_dif_1_default 00:41:58.821 ************************************ 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:58.821 bdev_null0 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:58.821 17:06:05 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:58.821 [2024-11-05 17:06:05.872436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:41:58.821 { 00:41:58.821 "params": { 00:41:58.821 "name": "Nvme$subsystem", 00:41:58.821 "trtype": "$TEST_TRANSPORT", 00:41:58.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:58.821 "adrfam": 
"ipv4", 00:41:58.821 "trsvcid": "$NVMF_PORT", 00:41:58.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:58.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:58.821 "hdgst": ${hdgst:-false}, 00:41:58.821 "ddgst": ${ddgst:-false} 00:41:58.821 }, 00:41:58.821 "method": "bdev_nvme_attach_controller" 00:41:58.821 } 00:41:58.821 EOF 00:41:58.821 )") 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:58.821 17:06:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:59.082 
17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:41:59.082 "params": { 00:41:59.082 "name": "Nvme0", 00:41:59.082 "trtype": "tcp", 00:41:59.082 "traddr": "10.0.0.2", 00:41:59.082 "adrfam": "ipv4", 00:41:59.082 "trsvcid": "4420", 00:41:59.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:59.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:59.082 "hdgst": false, 00:41:59.082 "ddgst": false 00:41:59.082 }, 00:41:59.082 "method": "bdev_nvme_attach_controller" 00:41:59.082 }' 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:59.082 17:06:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:41:59.343 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:59.343 fio-3.35 00:41:59.343 Starting 1 thread 00:42:11.568 00:42:11.568 filename0: (groupid=0, jobs=1): err= 0: pid=3481332: Tue Nov 5 17:06:16 2024 00:42:11.568 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10003msec) 00:42:11.568 slat (nsec): min=5377, max=31313, avg=6223.57, stdev=1571.64 00:42:11.568 clat (usec): min=912, max=43417, avg=40976.67, stdev=2604.76 00:42:11.568 lat (usec): min=917, max=43448, avg=40982.90, stdev=2604.85 00:42:11.568 clat percentiles (usec): 00:42:11.568 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:11.568 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:11.568 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:42:11.568 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:42:11.568 | 99.99th=[43254] 00:42:11.568 bw ( KiB/s): min= 384, max= 416, per=99.67%, avg=389.05, stdev=11.99, samples=19 00:42:11.568 iops : min= 96, max= 104, avg=97.26, stdev= 3.00, samples=19 00:42:11.568 lat (usec) : 1000=0.41% 00:42:11.568 lat (msec) : 50=99.59% 00:42:11.568 cpu : usr=93.78%, sys=6.01%, ctx=10, majf=0, minf=219 00:42:11.568 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:11.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:11.568 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:11.568 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:11.568 00:42:11.568 Run status group 0 (all jobs): 00:42:11.568 READ: bw=390KiB/s (400kB/s), 390KiB/s-390KiB/s (400kB/s-400kB/s), io=3904KiB (3998kB), run=10003-10003msec 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:11.568 17:06:17 
nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.568 00:42:11.568 real 0m11.249s 00:42:11.568 user 0m24.303s 00:42:11.568 sys 0m0.933s 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 ************************************ 00:42:11.568 END TEST fio_dif_1_default 00:42:11.568 ************************************ 00:42:11.568 17:06:17 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:11.568 17:06:17 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:11.568 17:06:17 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 
************************************ 00:42:11.568 START TEST fio_dif_1_multi_subsystems 00:42:11.568 ************************************ 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 bdev_null0 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 [2024-11-05 17:06:17.200459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 bdev_null1 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:11.568 17:06:17 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:11.568 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:11.569 { 00:42:11.569 "params": { 00:42:11.569 "name": "Nvme$subsystem", 00:42:11.569 "trtype": "$TEST_TRANSPORT", 00:42:11.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:11.569 "adrfam": "ipv4", 00:42:11.569 "trsvcid": "$NVMF_PORT", 00:42:11.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:11.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:11.569 "hdgst": ${hdgst:-false}, 00:42:11.569 "ddgst": ${ddgst:-false} 00:42:11.569 }, 00:42:11.569 "method": "bdev_nvme_attach_controller" 00:42:11.569 } 00:42:11.569 EOF 00:42:11.569 )") 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local 
asan_lib= 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:11.569 { 00:42:11.569 "params": { 00:42:11.569 "name": "Nvme$subsystem", 00:42:11.569 "trtype": "$TEST_TRANSPORT", 00:42:11.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:11.569 "adrfam": "ipv4", 00:42:11.569 "trsvcid": "$NVMF_PORT", 00:42:11.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:11.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:11.569 "hdgst": ${hdgst:-false}, 00:42:11.569 "ddgst": ${ddgst:-false} 00:42:11.569 }, 00:42:11.569 "method": "bdev_nvme_attach_controller" 00:42:11.569 } 00:42:11.569 EOF 00:42:11.569 )") 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:11.569 
17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:11.569 "params": { 00:42:11.569 "name": "Nvme0", 00:42:11.569 "trtype": "tcp", 00:42:11.569 "traddr": "10.0.0.2", 00:42:11.569 "adrfam": "ipv4", 00:42:11.569 "trsvcid": "4420", 00:42:11.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:11.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:11.569 "hdgst": false, 00:42:11.569 "ddgst": false 00:42:11.569 }, 00:42:11.569 "method": "bdev_nvme_attach_controller" 00:42:11.569 },{ 00:42:11.569 "params": { 00:42:11.569 "name": "Nvme1", 00:42:11.569 "trtype": "tcp", 00:42:11.569 "traddr": "10.0.0.2", 00:42:11.569 "adrfam": "ipv4", 00:42:11.569 "trsvcid": "4420", 00:42:11.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:11.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:11.569 "hdgst": false, 00:42:11.569 "ddgst": false 00:42:11.569 }, 00:42:11.569 "method": "bdev_nvme_attach_controller" 00:42:11.569 }' 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 
00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:11.569 17:06:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:11.569 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:11.569 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:11.569 fio-3.35 00:42:11.569 Starting 2 threads 00:42:21.556 00:42:21.556 filename0: (groupid=0, jobs=1): err= 0: pid=3484274: Tue Nov 5 17:06:28 2024 00:42:21.556 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10008msec) 00:42:21.556 slat (nsec): min=5389, max=43688, avg=6665.50, stdev=1989.46 00:42:21.556 clat (usec): min=40882, max=43564, avg=41335.53, stdev=536.97 00:42:21.556 lat (usec): min=40890, max=43598, avg=41342.20, stdev=536.94 00:42:21.556 clat percentiles (usec): 00:42:21.556 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:21.556 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:21.556 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:21.556 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:42:21.556 | 99.99th=[43779] 00:42:21.556 bw ( KiB/s): min= 384, max= 416, per=49.55%, avg=385.60, stdev= 7.16, samples=20 00:42:21.556 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:42:21.556 lat (msec) : 50=100.00% 00:42:21.556 cpu : usr=95.62%, sys=4.16%, ctx=47, majf=0, minf=149 00:42:21.556 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:42:21.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.556 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.556 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:21.556 filename1: (groupid=0, jobs=1): err= 0: pid=3484275: Tue Nov 5 17:06:28 2024 00:42:21.556 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10006msec) 00:42:21.556 slat (nsec): min=5394, max=35382, avg=6236.70, stdev=1502.59 00:42:21.556 clat (usec): min=40834, max=41991, avg=40988.77, stdev=76.55 00:42:21.556 lat (usec): min=40842, max=42000, avg=40995.01, stdev=76.89 00:42:21.556 clat percentiles (usec): 00:42:21.556 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:21.556 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:21.556 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:21.556 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:42:21.556 | 99.99th=[42206] 00:42:21.556 bw ( KiB/s): min= 384, max= 416, per=49.94%, avg=388.80, stdev=11.72, samples=20 00:42:21.556 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:42:21.556 lat (msec) : 50=100.00% 00:42:21.556 cpu : usr=96.05%, sys=3.75%, ctx=14, majf=0, minf=142 00:42:21.556 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.556 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.556 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:21.556 00:42:21.556 Run status group 0 (all jobs): 00:42:21.556 READ: bw=777KiB/s (796kB/s), 387KiB/s-390KiB/s (396kB/s-400kB/s), io=7776KiB (7963kB), run=10006-10008msec 00:42:21.556 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:21.556 
17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:21.556 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:21.556 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:21.556 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:21.556 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:21.556 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.556 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.818 00:42:21.818 real 0m11.499s 00:42:21.818 user 0m35.039s 00:42:21.818 sys 0m1.117s 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 ************************************ 00:42:21.818 END TEST fio_dif_1_multi_subsystems 00:42:21.818 ************************************ 00:42:21.818 17:06:28 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:21.818 17:06:28 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:21.818 17:06:28 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 ************************************ 00:42:21.818 START TEST fio_dif_rand_params 00:42:21.818 ************************************ 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 bdev_null0 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 17:06:28 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.818 [2024-11-05 17:06:28.781967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.818 17:06:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:21.819 { 00:42:21.819 "params": { 00:42:21.819 "name": "Nvme$subsystem", 00:42:21.819 "trtype": "$TEST_TRANSPORT", 00:42:21.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:21.819 "adrfam": "ipv4", 00:42:21.819 "trsvcid": "$NVMF_PORT", 00:42:21.819 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:21.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:21.819 "hdgst": ${hdgst:-false}, 00:42:21.819 "ddgst": ${ddgst:-false} 00:42:21.819 }, 00:42:21.819 "method": "bdev_nvme_attach_controller" 00:42:21.819 } 00:42:21.819 EOF 00:42:21.819 )") 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:21.819 17:06:28 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:21.819 "params": { 00:42:21.819 "name": "Nvme0", 00:42:21.819 "trtype": "tcp", 00:42:21.819 "traddr": "10.0.0.2", 00:42:21.819 "adrfam": "ipv4", 00:42:21.819 "trsvcid": "4420", 00:42:21.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:21.819 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:21.819 "hdgst": false, 00:42:21.819 "ddgst": false 00:42:21.819 }, 00:42:21.819 "method": "bdev_nvme_attach_controller" 00:42:21.819 }' 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:21.819 17:06:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:22.430 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:22.430 ... 00:42:22.430 fio-3.35 00:42:22.430 Starting 3 threads 00:42:29.012 00:42:29.012 filename0: (groupid=0, jobs=1): err= 0: pid=3486506: Tue Nov 5 17:06:34 2024 00:42:29.012 read: IOPS=239, BW=29.9MiB/s (31.4MB/s)(151MiB/5049msec) 00:42:29.012 slat (nsec): min=5575, max=78469, avg=9606.35, stdev=3797.54 00:42:29.012 clat (usec): min=6065, max=53853, avg=12488.33, stdev=4205.19 00:42:29.012 lat (usec): min=6073, max=53862, avg=12497.94, stdev=4205.08 00:42:29.012 clat percentiles (usec): 00:42:29.012 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10552], 00:42:29.012 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:42:29.012 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14615], 95.00th=[15270], 00:42:29.012 | 99.00th=[17171], 99.50th=[51119], 99.90th=[53740], 99.95th=[53740], 00:42:29.012 | 99.99th=[53740] 00:42:29.012 bw ( KiB/s): min=26624, max=35072, per=33.18%, avg=30873.60, stdev=2337.57, samples=10 00:42:29.012 iops : min= 208, max= 274, avg=241.20, stdev=18.26, samples=10 00:42:29.012 lat (msec) : 10=11.42%, 20=87.67%, 100=0.91% 00:42:29.012 cpu : usr=95.56%, sys=4.16%, ctx=13, majf=0, minf=70 00:42:29.012 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:29.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:29.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:29.012 issued rwts: total=1208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:29.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:29.012 filename0: (groupid=0, jobs=1): err= 0: pid=3486507: Tue Nov 5 17:06:34 2024 00:42:29.012 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(157MiB/5048msec) 00:42:29.012 slat (nsec): min=5594, max=90617, avg=7192.02, stdev=3263.68 00:42:29.012 clat 
(usec): min=6538, max=52804, avg=11994.25, stdev=3893.88 00:42:29.012 lat (usec): min=6549, max=52813, avg=12001.44, stdev=3894.03 00:42:29.012 clat percentiles (usec): 00:42:29.012 | 1.00th=[ 7439], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10421], 00:42:29.012 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:42:29.012 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[13960], 00:42:29.012 | 99.00th=[15795], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:42:29.012 | 99.99th=[52691] 00:42:29.012 bw ( KiB/s): min=24064, max=34304, per=34.53%, avg=32128.00, stdev=2941.83, samples=10 00:42:29.012 iops : min= 188, max= 268, avg=251.00, stdev=22.98, samples=10 00:42:29.012 lat (msec) : 10=13.67%, 20=85.45%, 50=0.48%, 100=0.40% 00:42:29.012 cpu : usr=94.47%, sys=5.25%, ctx=16, majf=0, minf=124 00:42:29.012 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:29.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:29.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:29.012 issued rwts: total=1258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:29.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:29.012 filename0: (groupid=0, jobs=1): err= 0: pid=3486508: Tue Nov 5 17:06:34 2024 00:42:29.012 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(151MiB/5046msec) 00:42:29.012 slat (nsec): min=5477, max=89476, avg=7197.15, stdev=3277.17 00:42:29.012 clat (usec): min=6208, max=52880, avg=12528.23, stdev=4409.04 00:42:29.012 lat (usec): min=6214, max=52888, avg=12535.43, stdev=4409.25 00:42:29.012 clat percentiles (usec): 00:42:29.012 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10552], 00:42:29.012 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:42:29.012 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14222], 95.00th=[14746], 00:42:29.012 | 99.00th=[48497], 99.50th=[50070], 99.90th=[52167], 99.95th=[52691], 
00:42:29.012 | 99.99th=[52691] 00:42:29.012 bw ( KiB/s): min=26880, max=33024, per=33.07%, avg=30771.20, stdev=1743.81, samples=10 00:42:29.012 iops : min= 210, max= 258, avg=240.40, stdev=13.62, samples=10 00:42:29.012 lat (msec) : 10=12.29%, 20=86.54%, 50=0.33%, 100=0.83% 00:42:29.012 cpu : usr=95.26%, sys=4.46%, ctx=14, majf=0, minf=210 00:42:29.012 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:29.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:29.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:29.012 issued rwts: total=1204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:29.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:29.012 00:42:29.012 Run status group 0 (all jobs): 00:42:29.012 READ: bw=90.9MiB/s (95.3MB/s), 29.8MiB/s-31.2MiB/s (31.3MB/s-32.7MB/s), io=459MiB (481MB), run=5046-5049msec 00:42:29.012 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:29.012 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:29.012 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:29.012 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:29.012 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:29.012 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:29.012 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 bdev_null0 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 
17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 [2024-11-05 17:06:35.116892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 bdev_null1 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 
17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:42:29.013 bdev_null2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:29.013 { 00:42:29.013 "params": { 00:42:29.013 "name": "Nvme$subsystem", 00:42:29.013 "trtype": "$TEST_TRANSPORT", 00:42:29.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:29.013 "adrfam": "ipv4", 00:42:29.013 "trsvcid": "$NVMF_PORT", 00:42:29.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:29.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:29.013 "hdgst": ${hdgst:-false}, 00:42:29.013 "ddgst": ${ddgst:-false} 00:42:29.013 }, 00:42:29.013 "method": "bdev_nvme_attach_controller" 00:42:29.013 } 00:42:29.013 EOF 00:42:29.013 )") 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # shift 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:29.013 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:29.014 { 00:42:29.014 "params": { 00:42:29.014 "name": "Nvme$subsystem", 00:42:29.014 "trtype": "$TEST_TRANSPORT", 00:42:29.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:29.014 "adrfam": "ipv4", 00:42:29.014 "trsvcid": "$NVMF_PORT", 00:42:29.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:29.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:29.014 "hdgst": ${hdgst:-false}, 00:42:29.014 "ddgst": ${ddgst:-false} 00:42:29.014 }, 00:42:29.014 "method": "bdev_nvme_attach_controller" 00:42:29.014 } 00:42:29.014 EOF 00:42:29.014 )") 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:29.014 { 00:42:29.014 "params": { 00:42:29.014 "name": "Nvme$subsystem", 00:42:29.014 "trtype": "$TEST_TRANSPORT", 00:42:29.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:29.014 "adrfam": "ipv4", 00:42:29.014 "trsvcid": "$NVMF_PORT", 00:42:29.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:29.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:29.014 "hdgst": ${hdgst:-false}, 00:42:29.014 "ddgst": ${ddgst:-false} 00:42:29.014 }, 00:42:29.014 "method": "bdev_nvme_attach_controller" 00:42:29.014 } 00:42:29.014 EOF 00:42:29.014 )") 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:29.014 "params": { 00:42:29.014 "name": "Nvme0", 00:42:29.014 "trtype": "tcp", 00:42:29.014 "traddr": "10.0.0.2", 00:42:29.014 "adrfam": "ipv4", 00:42:29.014 "trsvcid": "4420", 00:42:29.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:29.014 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:29.014 "hdgst": false, 00:42:29.014 "ddgst": false 00:42:29.014 }, 00:42:29.014 "method": "bdev_nvme_attach_controller" 00:42:29.014 },{ 00:42:29.014 "params": { 00:42:29.014 "name": "Nvme1", 00:42:29.014 "trtype": "tcp", 00:42:29.014 "traddr": "10.0.0.2", 00:42:29.014 "adrfam": "ipv4", 00:42:29.014 "trsvcid": "4420", 00:42:29.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:29.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:29.014 "hdgst": false, 00:42:29.014 "ddgst": false 00:42:29.014 }, 00:42:29.014 "method": "bdev_nvme_attach_controller" 00:42:29.014 },{ 00:42:29.014 "params": { 00:42:29.014 "name": "Nvme2", 00:42:29.014 "trtype": "tcp", 00:42:29.014 "traddr": "10.0.0.2", 00:42:29.014 "adrfam": "ipv4", 00:42:29.014 "trsvcid": "4420", 00:42:29.014 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:29.014 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:29.014 "hdgst": false, 00:42:29.014 "ddgst": false 00:42:29.014 }, 00:42:29.014 "method": "bdev_nvme_attach_controller" 00:42:29.014 }' 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:29.014 17:06:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:29.014 17:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:29.014 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:29.014 ... 00:42:29.014 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:29.014 ... 00:42:29.014 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:29.014 ... 
00:42:29.014 fio-3.35 00:42:29.014 Starting 24 threads 00:42:41.231 00:42:41.231 filename0: (groupid=0, jobs=1): err= 0: pid=3488017: Tue Nov 5 17:06:46 2024 00:42:41.231 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.5MiB/10028msec) 00:42:41.231 slat (nsec): min=5408, max=83699, avg=14442.83, stdev=12176.17 00:42:41.231 clat (usec): min=9215, max=48046, avg=31939.28, stdev=3280.73 00:42:41.231 lat (usec): min=9229, max=48051, avg=31953.72, stdev=3280.46 00:42:41.231 clat percentiles (usec): 00:42:41.231 | 1.00th=[16450], 5.00th=[24249], 10.00th=[31589], 20.00th=[32113], 00:42:41.231 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:42:41.231 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:41.231 | 99.00th=[35390], 99.50th=[41157], 99.90th=[42206], 99.95th=[47973], 00:42:41.231 | 99.99th=[47973] 00:42:41.231 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=1995.20, stdev=83.88, samples=20 00:42:41.231 iops : min= 480, max= 544, avg=498.80, stdev=20.97, samples=20 00:42:41.231 lat (msec) : 10=0.30%, 20=2.26%, 50=97.44% 00:42:41.231 cpu : usr=98.67%, sys=0.97%, ctx=27, majf=0, minf=55 00:42:41.231 IO depths : 1=5.8%, 2=11.8%, 4=24.2%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:42:41.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.231 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.231 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.231 filename0: (groupid=0, jobs=1): err= 0: pid=3488018: Tue Nov 5 17:06:46 2024 00:42:41.231 read: IOPS=497, BW=1991KiB/s (2038kB/s)(19.5MiB/10023msec) 00:42:41.231 slat (nsec): min=5409, max=98954, avg=14386.14, stdev=12755.05 00:42:41.231 clat (usec): min=8918, max=50957, avg=32034.08, stdev=4057.48 00:42:41.231 lat (usec): min=8925, max=50963, avg=32048.47, stdev=4058.62 00:42:41.231 clat percentiles (usec): 00:42:41.231 | 
1.00th=[18482], 5.00th=[22414], 10.00th=[27132], 20.00th=[32113], 00:42:41.231 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.231 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:42:41.231 | 99.00th=[44303], 99.50th=[46400], 99.90th=[51119], 99.95th=[51119], 00:42:41.231 | 99.99th=[51119] 00:42:41.231 bw ( KiB/s): min= 1792, max= 2400, per=4.19%, avg=1988.80, stdev=127.38, samples=20 00:42:41.231 iops : min= 448, max= 600, avg=497.20, stdev=31.84, samples=20 00:42:41.231 lat (msec) : 10=0.32%, 20=0.88%, 50=98.68%, 100=0.12% 00:42:41.231 cpu : usr=98.63%, sys=1.02%, ctx=14, majf=0, minf=19 00:42:41.231 IO depths : 1=4.9%, 2=10.1%, 4=22.2%, 8=55.1%, 16=7.7%, 32=0.0%, >=64=0.0% 00:42:41.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.231 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.231 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.231 filename0: (groupid=0, jobs=1): err= 0: pid=3488020: Tue Nov 5 17:06:46 2024 00:42:41.231 read: IOPS=509, BW=2039KiB/s (2088kB/s)(19.9MiB/10007msec) 00:42:41.231 slat (nsec): min=5399, max=84474, avg=19861.44, stdev=14567.65 00:42:41.231 clat (usec): min=8747, max=55024, avg=31213.94, stdev=5633.59 00:42:41.231 lat (usec): min=8752, max=55033, avg=31233.80, stdev=5637.82 00:42:41.231 clat percentiles (usec): 00:42:41.231 | 1.00th=[16712], 5.00th=[20841], 10.00th=[22676], 20.00th=[27395], 00:42:41.231 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:42:41.231 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[38536], 00:42:41.231 | 99.00th=[49546], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:42:41.231 | 99.99th=[54789] 00:42:41.231 bw ( KiB/s): min= 1795, max= 2384, per=4.25%, avg=2015.32, stdev=145.24, samples=19 00:42:41.231 iops : min= 448, max= 596, avg=503.79, 
stdev=36.37, samples=19 00:42:41.231 lat (msec) : 10=0.12%, 20=2.14%, 50=96.96%, 100=0.78% 00:42:41.231 cpu : usr=98.93%, sys=0.73%, ctx=13, majf=0, minf=22 00:42:41.231 IO depths : 1=2.6%, 2=6.9%, 4=18.9%, 8=61.5%, 16=10.1%, 32=0.0%, >=64=0.0% 00:42:41.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.231 complete : 0=0.0%, 4=92.5%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.231 issued rwts: total=5102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.231 filename0: (groupid=0, jobs=1): err= 0: pid=3488021: Tue Nov 5 17:06:46 2024 00:42:41.231 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.5MiB/10009msec) 00:42:41.231 slat (nsec): min=5388, max=80214, avg=16973.18, stdev=13767.99 00:42:41.231 clat (usec): min=8063, max=72334, avg=32076.24, stdev=4597.69 00:42:41.231 lat (usec): min=8069, max=72355, avg=32093.22, stdev=4598.76 00:42:41.231 clat percentiles (usec): 00:42:41.231 | 1.00th=[17957], 5.00th=[22676], 10.00th=[26870], 20.00th=[31851], 00:42:41.231 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.231 | 70.00th=[33162], 80.00th=[33424], 90.00th=[34341], 95.00th=[38011], 00:42:41.231 | 99.00th=[45876], 99.50th=[48497], 99.90th=[52691], 99.95th=[71828], 00:42:41.231 | 99.99th=[71828] 00:42:41.231 bw ( KiB/s): min= 1792, max= 2160, per=4.17%, avg=1974.74, stdev=78.58, samples=19 00:42:41.231 iops : min= 448, max= 540, avg=493.68, stdev=19.64, samples=19 00:42:41.231 lat (msec) : 10=0.20%, 20=1.18%, 50=98.25%, 100=0.36% 00:42:41.231 cpu : usr=99.02%, sys=0.63%, ctx=14, majf=0, minf=16 00:42:41.231 IO depths : 1=0.4%, 2=1.1%, 4=4.3%, 8=78.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:42:41.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.231 complete : 0=0.0%, 4=89.8%, 8=8.3%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.231 issued rwts: total=4980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.231 
latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.231 filename0: (groupid=0, jobs=1): err= 0: pid=3488022: Tue Nov 5 17:06:46 2024 00:42:41.231 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10009msec) 00:42:41.231 slat (nsec): min=5448, max=97647, avg=19481.25, stdev=12641.20 00:42:41.231 clat (usec): min=20555, max=43440, avg=32651.97, stdev=1687.80 00:42:41.231 lat (usec): min=20564, max=43449, avg=32671.45, stdev=1688.12 00:42:41.231 clat percentiles (usec): 00:42:41.231 | 1.00th=[24249], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:42:41.232 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.232 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:42:41.232 | 99.00th=[40109], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:42:41.232 | 99.99th=[43254] 00:42:41.232 bw ( KiB/s): min= 1920, max= 2048, per=4.11%, avg=1946.95, stdev=51.72, samples=19 00:42:41.232 iops : min= 480, max= 512, avg=486.74, stdev=12.93, samples=19 00:42:41.232 lat (msec) : 50=100.00% 00:42:41.232 cpu : usr=98.99%, sys=0.68%, ctx=14, majf=0, minf=18 00:42:41.232 IO depths : 1=5.8%, 2=11.9%, 4=24.4%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:42:41.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.232 filename0: (groupid=0, jobs=1): err= 0: pid=3488023: Tue Nov 5 17:06:46 2024 00:42:41.232 read: IOPS=488, BW=1952KiB/s (1999kB/s)(19.1MiB/10007msec) 00:42:41.232 slat (nsec): min=5398, max=87634, avg=24399.40, stdev=13521.08 00:42:41.232 clat (usec): min=8883, max=77574, avg=32564.12, stdev=3478.53 00:42:41.232 lat (usec): min=8889, max=77593, avg=32588.52, stdev=3479.19 00:42:41.232 clat percentiles (usec): 00:42:41.232 | 1.00th=[21627], 5.00th=[31327], 
10.00th=[31851], 20.00th=[32113], 00:42:41.232 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:42:41.232 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:42:41.232 | 99.00th=[42206], 99.50th=[50070], 99.90th=[77071], 99.95th=[77071], 00:42:41.232 | 99.99th=[77071] 00:42:41.232 bw ( KiB/s): min= 1763, max= 2048, per=4.10%, avg=1942.05, stdev=65.99, samples=19 00:42:41.232 iops : min= 440, max= 512, avg=485.47, stdev=16.61, samples=19 00:42:41.232 lat (msec) : 10=0.33%, 20=0.04%, 50=99.14%, 100=0.49% 00:42:41.232 cpu : usr=98.79%, sys=0.88%, ctx=14, majf=0, minf=21 00:42:41.232 IO depths : 1=5.4%, 2=11.1%, 4=23.2%, 8=53.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:42:41.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 issued rwts: total=4884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.232 filename0: (groupid=0, jobs=1): err= 0: pid=3488024: Tue Nov 5 17:06:46 2024 00:42:41.232 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10008msec) 00:42:41.232 slat (usec): min=6, max=109, avg=29.64, stdev=19.87 00:42:41.232 clat (usec): min=8875, max=53412, avg=31975.97, stdev=3972.92 00:42:41.232 lat (usec): min=8885, max=53422, avg=32005.61, stdev=3974.84 00:42:41.232 clat percentiles (usec): 00:42:41.232 | 1.00th=[20317], 5.00th=[22938], 10.00th=[28705], 20.00th=[31851], 00:42:41.232 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:42:41.232 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:42:41.232 | 99.00th=[47973], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:42:41.232 | 99.99th=[53216] 00:42:41.232 bw ( KiB/s): min= 1795, max= 2144, per=4.17%, avg=1975.74, stdev=94.24, samples=19 00:42:41.232 iops : min= 448, max= 536, avg=493.89, stdev=23.64, samples=19 00:42:41.232 lat (msec) : 
10=0.32%, 20=0.28%, 50=98.63%, 100=0.77% 00:42:41.232 cpu : usr=98.91%, sys=0.72%, ctx=67, majf=0, minf=19 00:42:41.232 IO depths : 1=4.8%, 2=10.0%, 4=22.0%, 8=55.3%, 16=7.8%, 32=0.0%, >=64=0.0% 00:42:41.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 issued rwts: total=4964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.232 filename0: (groupid=0, jobs=1): err= 0: pid=3488025: Tue Nov 5 17:06:46 2024 00:42:41.232 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10009msec) 00:42:41.232 slat (nsec): min=5435, max=90695, avg=20336.21, stdev=15995.50 00:42:41.232 clat (usec): min=14378, max=49525, avg=32350.95, stdev=2370.93 00:42:41.232 lat (usec): min=14387, max=49557, avg=32371.29, stdev=2371.05 00:42:41.232 clat percentiles (usec): 00:42:41.232 | 1.00th=[21103], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:42:41.232 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:42:41.232 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:42:41.232 | 99.00th=[35390], 99.50th=[36963], 99.90th=[40633], 99.95th=[49546], 00:42:41.232 | 99.99th=[49546] 00:42:41.232 bw ( KiB/s): min= 1920, max= 2224, per=4.15%, avg=1965.47, stdev=81.98, samples=19 00:42:41.232 iops : min= 480, max= 556, avg=491.37, stdev=20.49, samples=19 00:42:41.232 lat (msec) : 20=0.97%, 50=99.03% 00:42:41.232 cpu : usr=98.75%, sys=0.94%, ctx=17, majf=0, minf=19 00:42:41.232 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:42:41.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.232 
filename1: (groupid=0, jobs=1): err= 0: pid=3488026: Tue Nov 5 17:06:46 2024 00:42:41.232 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10010msec) 00:42:41.232 slat (nsec): min=4999, max=90208, avg=22596.86, stdev=16997.96 00:42:41.232 clat (usec): min=17556, max=42660, avg=32604.12, stdev=1259.07 00:42:41.232 lat (usec): min=17563, max=42674, avg=32626.72, stdev=1258.31 00:42:41.232 clat percentiles (usec): 00:42:41.232 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:42:41.232 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:42:41.232 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:42:41.232 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[42730], 00:42:41.232 | 99.99th=[42730] 00:42:41.232 bw ( KiB/s): min= 1920, max= 2048, per=4.11%, avg=1946.95, stdev=53.61, samples=19 00:42:41.232 iops : min= 480, max= 512, avg=486.74, stdev=13.40, samples=19 00:42:41.232 lat (msec) : 20=0.39%, 50=99.61% 00:42:41.232 cpu : usr=99.08%, sys=0.60%, ctx=16, majf=0, minf=17 00:42:41.232 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:41.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.232 filename1: (groupid=0, jobs=1): err= 0: pid=3488027: Tue Nov 5 17:06:46 2024 00:42:41.232 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10008msec) 00:42:41.232 slat (nsec): min=5579, max=88927, avg=17378.31, stdev=12691.07 00:42:41.232 clat (usec): min=14714, max=50857, avg=32459.83, stdev=2250.41 00:42:41.232 lat (usec): min=14732, max=50869, avg=32477.21, stdev=2250.23 00:42:41.232 clat percentiles (usec): 00:42:41.232 | 1.00th=[19792], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:42:41.232 | 
30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:42:41.232 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:42:41.232 | 99.00th=[35390], 99.50th=[35914], 99.90th=[50594], 99.95th=[50594], 00:42:41.232 | 99.99th=[51119] 00:42:41.232 bw ( KiB/s): min= 1920, max= 2144, per=4.13%, avg=1958.74, stdev=69.60, samples=19 00:42:41.232 iops : min= 480, max= 536, avg=489.68, stdev=17.40, samples=19 00:42:41.232 lat (msec) : 20=1.10%, 50=98.74%, 100=0.16% 00:42:41.232 cpu : usr=98.89%, sys=0.79%, ctx=18, majf=0, minf=19 00:42:41.232 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:42:41.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 issued rwts: total=4908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.232 filename1: (groupid=0, jobs=1): err= 0: pid=3488028: Tue Nov 5 17:06:46 2024 00:42:41.232 read: IOPS=495, BW=1981KiB/s (2028kB/s)(19.4MiB/10008msec) 00:42:41.232 slat (nsec): min=5392, max=70712, avg=17043.42, stdev=11630.67 00:42:41.232 clat (usec): min=8853, max=83895, avg=32192.89, stdev=5567.09 00:42:41.232 lat (usec): min=8860, max=83914, avg=32209.93, stdev=5567.81 00:42:41.232 clat percentiles (usec): 00:42:41.232 | 1.00th=[16581], 5.00th=[21365], 10.00th=[25822], 20.00th=[31851], 00:42:41.232 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.232 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[39584], 00:42:41.232 | 99.00th=[49546], 99.50th=[54264], 99.90th=[66323], 99.95th=[83362], 00:42:41.232 | 99.99th=[84411] 00:42:41.232 bw ( KiB/s): min= 1788, max= 2192, per=4.17%, avg=1978.74, stdev=97.65, samples=19 00:42:41.232 iops : min= 447, max= 548, avg=494.68, stdev=24.41, samples=19 00:42:41.232 lat (msec) : 10=0.32%, 20=3.63%, 50=95.12%, 100=0.93% 
00:42:41.232 cpu : usr=98.89%, sys=0.77%, ctx=13, majf=0, minf=17 00:42:41.232 IO depths : 1=2.3%, 2=5.0%, 4=14.7%, 8=66.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:42:41.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 complete : 0=0.0%, 4=91.9%, 8=3.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.232 filename1: (groupid=0, jobs=1): err= 0: pid=3488030: Tue Nov 5 17:06:46 2024 00:42:41.232 read: IOPS=485, BW=1940KiB/s (1987kB/s)(19.0MiB/10009msec) 00:42:41.232 slat (nsec): min=5403, max=82294, avg=20726.26, stdev=13190.72 00:42:41.232 clat (usec): min=9864, max=67064, avg=32799.54, stdev=2471.84 00:42:41.232 lat (usec): min=9869, max=67079, avg=32820.27, stdev=2471.54 00:42:41.232 clat percentiles (usec): 00:42:41.232 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:42:41.232 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.232 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:42:41.232 | 99.00th=[41681], 99.50th=[49546], 99.90th=[54264], 99.95th=[66847], 00:42:41.232 | 99.99th=[66847] 00:42:41.232 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1933.63, stdev=72.27, samples=19 00:42:41.232 iops : min= 448, max= 512, avg=483.37, stdev=18.15, samples=19 00:42:41.232 lat (msec) : 10=0.14%, 20=0.27%, 50=99.26%, 100=0.33% 00:42:41.232 cpu : usr=98.80%, sys=0.88%, ctx=19, majf=0, minf=26 00:42:41.232 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:41.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.232 issued rwts: total=4855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.233 filename1: 
(groupid=0, jobs=1): err= 0: pid=3488031: Tue Nov 5 17:06:46 2024 00:42:41.233 read: IOPS=487, BW=1950KiB/s (1996kB/s)(19.1MiB/10012msec) 00:42:41.233 slat (nsec): min=5466, max=91768, avg=18967.33, stdev=13730.57 00:42:41.233 clat (usec): min=19108, max=40679, avg=32663.97, stdev=1247.06 00:42:41.233 lat (usec): min=19118, max=40694, avg=32682.94, stdev=1245.54 00:42:41.233 clat percentiles (usec): 00:42:41.233 | 1.00th=[29754], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:42:41.233 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.233 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:42:41.233 | 99.00th=[35390], 99.50th=[35914], 99.90th=[40633], 99.95th=[40633], 00:42:41.233 | 99.99th=[40633] 00:42:41.233 bw ( KiB/s): min= 1920, max= 2048, per=4.11%, avg=1946.95, stdev=53.61, samples=19 00:42:41.233 iops : min= 480, max= 512, avg=486.74, stdev=13.40, samples=19 00:42:41.233 lat (msec) : 20=0.33%, 50=99.67% 00:42:41.233 cpu : usr=98.77%, sys=0.90%, ctx=15, majf=0, minf=23 00:42:41.233 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:41.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.233 filename1: (groupid=0, jobs=1): err= 0: pid=3488032: Tue Nov 5 17:06:46 2024 00:42:41.233 read: IOPS=497, BW=1990KiB/s (2037kB/s)(19.5MiB/10028msec) 00:42:41.233 slat (nsec): min=5428, max=92016, avg=14486.03, stdev=11434.97 00:42:41.233 clat (usec): min=9228, max=44554, avg=32048.41, stdev=3461.88 00:42:41.233 lat (usec): min=9235, max=44582, avg=32062.89, stdev=3461.96 00:42:41.233 clat percentiles (usec): 00:42:41.233 | 1.00th=[15926], 5.00th=[23725], 10.00th=[31589], 20.00th=[32113], 00:42:41.233 | 30.00th=[32375], 
40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.233 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:42:41.233 | 99.00th=[37487], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:42:41.233 | 99.99th=[44303] 00:42:41.233 bw ( KiB/s): min= 1792, max= 2528, per=4.19%, avg=1988.80, stdev=146.19, samples=20 00:42:41.233 iops : min= 448, max= 632, avg=497.20, stdev=36.55, samples=20 00:42:41.233 lat (msec) : 10=0.32%, 20=2.21%, 50=97.47% 00:42:41.233 cpu : usr=98.69%, sys=0.97%, ctx=12, majf=0, minf=30 00:42:41.233 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:42:41.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.233 filename1: (groupid=0, jobs=1): err= 0: pid=3488033: Tue Nov 5 17:06:46 2024 00:42:41.233 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:42:41.233 slat (nsec): min=5441, max=87470, avg=16447.24, stdev=11757.59 00:42:41.233 clat (usec): min=13967, max=46373, avg=32565.35, stdev=2298.88 00:42:41.233 lat (usec): min=13991, max=46381, avg=32581.79, stdev=2298.59 00:42:41.233 clat percentiles (usec): 00:42:41.233 | 1.00th=[21890], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:42:41.233 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.233 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:42:41.233 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:42:41.233 | 99.99th=[46400] 00:42:41.233 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=1953.68, stdev=71.93, samples=19 00:42:41.233 iops : min= 480, max= 544, avg=488.42, stdev=17.98, samples=19 00:42:41.233 lat (msec) : 20=0.65%, 50=99.35% 00:42:41.233 cpu : usr=99.03%, sys=0.65%, 
ctx=14, majf=0, minf=27 00:42:41.233 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:41.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.233 filename1: (groupid=0, jobs=1): err= 0: pid=3488034: Tue Nov 5 17:06:46 2024 00:42:41.233 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.3MiB/10008msec) 00:42:41.233 slat (nsec): min=5401, max=73173, avg=15791.56, stdev=11318.26 00:42:41.233 clat (usec): min=7886, max=53442, avg=32380.55, stdev=4724.08 00:42:41.233 lat (usec): min=7892, max=53461, avg=32396.34, stdev=4724.94 00:42:41.233 clat percentiles (usec): 00:42:41.233 | 1.00th=[17433], 5.00th=[23725], 10.00th=[27657], 20.00th=[31851], 00:42:41.233 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.233 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[40109], 00:42:41.233 | 99.00th=[49546], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:42:41.233 | 99.99th=[53216] 00:42:41.233 bw ( KiB/s): min= 1779, max= 2112, per=4.15%, avg=1965.63, stdev=74.92, samples=19 00:42:41.233 iops : min= 444, max= 528, avg=491.37, stdev=18.83, samples=19 00:42:41.233 lat (msec) : 10=0.12%, 20=1.30%, 50=97.75%, 100=0.83% 00:42:41.233 cpu : usr=98.89%, sys=0.78%, ctx=14, majf=0, minf=18 00:42:41.233 IO depths : 1=0.9%, 2=3.1%, 4=10.9%, 8=71.0%, 16=14.0%, 32=0.0%, >=64=0.0% 00:42:41.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 complete : 0=0.0%, 4=91.1%, 8=5.6%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 issued rwts: total=4930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.233 filename2: (groupid=0, jobs=1): err= 0: pid=3488035: Tue Nov 5 
17:06:46 2024 00:42:41.233 read: IOPS=508, BW=2034KiB/s (2083kB/s)(19.9MiB/10027msec) 00:42:41.233 slat (nsec): min=5402, max=79458, avg=11219.79, stdev=8418.89 00:42:41.233 clat (usec): min=9043, max=53133, avg=31382.89, stdev=4924.28 00:42:41.233 lat (usec): min=9062, max=53141, avg=31394.11, stdev=4924.74 00:42:41.233 clat percentiles (usec): 00:42:41.233 | 1.00th=[14222], 5.00th=[20841], 10.00th=[23725], 20.00th=[31851], 00:42:41.233 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:42:41.233 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:42:41.233 | 99.00th=[45351], 99.50th=[47973], 99.90th=[53216], 99.95th=[53216], 00:42:41.233 | 99.99th=[53216] 00:42:41.233 bw ( KiB/s): min= 1792, max= 2400, per=4.29%, avg=2032.80, stdev=149.15, samples=20 00:42:41.233 iops : min= 448, max= 600, avg=508.20, stdev=37.29, samples=20 00:42:41.233 lat (msec) : 10=0.31%, 20=3.90%, 50=95.43%, 100=0.35% 00:42:41.233 cpu : usr=98.79%, sys=0.88%, ctx=14, majf=0, minf=28 00:42:41.233 IO depths : 1=4.3%, 2=9.2%, 4=21.5%, 8=56.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:42:41.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 issued rwts: total=5098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.233 filename2: (groupid=0, jobs=1): err= 0: pid=3488036: Tue Nov 5 17:06:46 2024 00:42:41.233 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.3MiB/10001msec) 00:42:41.233 slat (nsec): min=5413, max=82406, avg=16399.67, stdev=13449.07 00:42:41.233 clat (usec): min=14793, max=50886, avg=32199.72, stdev=3252.95 00:42:41.233 lat (usec): min=14799, max=50892, avg=32216.12, stdev=3254.49 00:42:41.233 clat percentiles (usec): 00:42:41.233 | 1.00th=[20579], 5.00th=[24249], 10.00th=[31589], 20.00th=[32113], 00:42:41.233 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 
60.00th=[32637], 00:42:41.233 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:42:41.233 | 99.00th=[42206], 99.50th=[46400], 99.90th=[50594], 99.95th=[51119], 00:42:41.233 | 99.99th=[51119] 00:42:41.233 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=1983.16, stdev=77.37, samples=19 00:42:41.233 iops : min= 480, max= 544, avg=495.79, stdev=19.34, samples=19 00:42:41.233 lat (msec) : 20=0.73%, 50=99.11%, 100=0.16% 00:42:41.233 cpu : usr=98.82%, sys=0.85%, ctx=13, majf=0, minf=22 00:42:41.233 IO depths : 1=4.3%, 2=9.6%, 4=22.4%, 8=55.5%, 16=8.2%, 32=0.0%, >=64=0.0% 00:42:41.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.233 filename2: (groupid=0, jobs=1): err= 0: pid=3488037: Tue Nov 5 17:06:46 2024 00:42:41.233 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10007msec) 00:42:41.233 slat (nsec): min=5408, max=79907, avg=14905.71, stdev=11135.88 00:42:41.233 clat (usec): min=8736, max=53490, avg=31890.26, stdev=4184.84 00:42:41.233 lat (usec): min=8743, max=53505, avg=31905.17, stdev=4185.81 00:42:41.233 clat percentiles (usec): 00:42:41.233 | 1.00th=[19530], 5.00th=[21890], 10.00th=[25822], 20.00th=[31851], 00:42:41.233 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:42:41.233 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:42:41.233 | 99.00th=[44827], 99.50th=[47973], 99.90th=[53216], 99.95th=[53216], 00:42:41.233 | 99.99th=[53740] 00:42:41.233 bw ( KiB/s): min= 1840, max= 2208, per=4.22%, avg=2000.00, stdev=108.39, samples=19 00:42:41.233 iops : min= 460, max= 552, avg=500.00, stdev=27.10, samples=19 00:42:41.233 lat (msec) : 10=0.12%, 20=1.42%, 50=98.14%, 100=0.32% 00:42:41.233 cpu : usr=98.84%, sys=0.83%, ctx=12, majf=0, 
minf=26 00:42:41.233 IO depths : 1=1.7%, 2=3.4%, 4=8.1%, 8=72.8%, 16=14.1%, 32=0.0%, >=64=0.0% 00:42:41.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 complete : 0=0.0%, 4=90.5%, 8=6.9%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.233 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.233 filename2: (groupid=0, jobs=1): err= 0: pid=3488038: Tue Nov 5 17:06:46 2024 00:42:41.233 read: IOPS=506, BW=2026KiB/s (2074kB/s)(19.8MiB/10019msec) 00:42:41.233 slat (usec): min=5, max=113, avg=22.88, stdev=18.34 00:42:41.233 clat (usec): min=14402, max=54882, avg=31389.97, stdev=4270.81 00:42:41.233 lat (usec): min=14412, max=54896, avg=31412.85, stdev=4274.03 00:42:41.233 clat percentiles (usec): 00:42:41.233 | 1.00th=[19006], 5.00th=[21627], 10.00th=[23987], 20.00th=[31851], 00:42:41.233 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:42:41.233 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:42:41.233 | 99.00th=[43779], 99.50th=[50070], 99.90th=[54789], 99.95th=[54789], 00:42:41.233 | 99.99th=[54789] 00:42:41.233 bw ( KiB/s): min= 1920, max= 2480, per=4.27%, avg=2023.20, stdev=141.45, samples=20 00:42:41.234 iops : min= 480, max= 620, avg=505.80, stdev=35.36, samples=20 00:42:41.234 lat (msec) : 20=1.38%, 50=98.07%, 100=0.55% 00:42:41.234 cpu : usr=98.15%, sys=1.23%, ctx=455, majf=0, minf=39 00:42:41.234 IO depths : 1=5.0%, 2=10.2%, 4=21.9%, 8=55.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:42:41.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 issued rwts: total=5074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.234 filename2: (groupid=0, jobs=1): err= 0: pid=3488040: Tue Nov 5 17:06:46 2024 00:42:41.234 read: 
IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10007msec) 00:42:41.234 slat (usec): min=5, max=137, avg=25.41, stdev=18.07 00:42:41.234 clat (usec): min=14328, max=57494, avg=32305.27, stdev=2876.55 00:42:41.234 lat (usec): min=14336, max=57501, avg=32330.68, stdev=2878.29 00:42:41.234 clat percentiles (usec): 00:42:41.234 | 1.00th=[21890], 5.00th=[27657], 10.00th=[31851], 20.00th=[32113], 00:42:41.234 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:42:41.234 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:41.234 | 99.00th=[42206], 99.50th=[44827], 99.90th=[56361], 99.95th=[56361], 00:42:41.234 | 99.99th=[57410] 00:42:41.234 bw ( KiB/s): min= 1888, max= 2176, per=4.14%, avg=1963.79, stdev=84.81, samples=19 00:42:41.234 iops : min= 472, max= 544, avg=490.95, stdev=21.20, samples=19 00:42:41.234 lat (msec) : 20=0.12%, 50=99.43%, 100=0.45% 00:42:41.234 cpu : usr=98.85%, sys=0.79%, ctx=43, majf=0, minf=22 00:42:41.234 IO depths : 1=5.3%, 2=11.1%, 4=23.5%, 8=52.9%, 16=7.3%, 32=0.0%, >=64=0.0% 00:42:41.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.234 filename2: (groupid=0, jobs=1): err= 0: pid=3488041: Tue Nov 5 17:06:46 2024 00:42:41.234 read: IOPS=486, BW=1948KiB/s (1994kB/s)(19.0MiB/10010msec) 00:42:41.234 slat (usec): min=5, max=111, avg=28.51, stdev=17.49 00:42:41.234 clat (usec): min=9625, max=54913, avg=32581.77, stdev=2444.17 00:42:41.234 lat (usec): min=9634, max=54927, avg=32610.28, stdev=2444.16 00:42:41.234 clat percentiles (usec): 00:42:41.234 | 1.00th=[22414], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:42:41.234 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:42:41.234 | 70.00th=[32900], 80.00th=[33162], 
90.00th=[33424], 95.00th=[33817], 00:42:41.234 | 99.00th=[35914], 99.50th=[46924], 99.90th=[54789], 99.95th=[54789], 00:42:41.234 | 99.99th=[54789] 00:42:41.234 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1940.21, stdev=64.19, samples=19 00:42:41.234 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:42:41.234 lat (msec) : 10=0.21%, 20=0.27%, 50=99.08%, 100=0.45% 00:42:41.234 cpu : usr=98.96%, sys=0.68%, ctx=36, majf=0, minf=21 00:42:41.234 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:42:41.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 issued rwts: total=4874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.234 filename2: (groupid=0, jobs=1): err= 0: pid=3488042: Tue Nov 5 17:06:46 2024 00:42:41.234 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10019msec) 00:42:41.234 slat (usec): min=4, max=103, avg=24.18, stdev=16.50 00:42:41.234 clat (usec): min=18193, max=35600, avg=32500.65, stdev=1456.23 00:42:41.234 lat (usec): min=18197, max=35607, avg=32524.83, stdev=1456.58 00:42:41.234 clat percentiles (usec): 00:42:41.234 | 1.00th=[24511], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:42:41.234 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:42:41.234 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:41.234 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:42:41.234 | 99.99th=[35390] 00:42:41.234 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1953.68, stdev=57.91, samples=19 00:42:41.234 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:42:41.234 lat (msec) : 20=0.57%, 50=99.43% 00:42:41.234 cpu : usr=98.83%, sys=0.84%, ctx=36, majf=0, minf=20 00:42:41.234 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, 
>=64=0.0% 00:42:41.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.234 filename2: (groupid=0, jobs=1): err= 0: pid=3488043: Tue Nov 5 17:06:46 2024 00:42:41.234 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10018msec) 00:42:41.234 slat (nsec): min=5397, max=82402, avg=15130.35, stdev=11892.77 00:42:41.234 clat (usec): min=15421, max=52482, avg=32052.99, stdev=3533.89 00:42:41.234 lat (usec): min=15427, max=52501, avg=32068.13, stdev=3534.89 00:42:41.234 clat percentiles (usec): 00:42:41.234 | 1.00th=[19006], 5.00th=[24249], 10.00th=[29492], 20.00th=[32113], 00:42:41.234 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:42:41.234 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:42:41.234 | 99.00th=[41681], 99.50th=[43779], 99.90th=[52167], 99.95th=[52691], 00:42:41.234 | 99.99th=[52691] 00:42:41.234 bw ( KiB/s): min= 1920, max= 2272, per=4.19%, avg=1986.40, stdev=98.12, samples=20 00:42:41.234 iops : min= 480, max= 568, avg=496.60, stdev=24.53, samples=20 00:42:41.234 lat (msec) : 20=1.28%, 50=98.59%, 100=0.12% 00:42:41.234 cpu : usr=98.90%, sys=0.77%, ctx=17, majf=0, minf=22 00:42:41.234 IO depths : 1=4.6%, 2=9.7%, 4=22.0%, 8=55.8%, 16=7.9%, 32=0.0%, >=64=0.0% 00:42:41.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:41.234 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:41.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:41.234 00:42:41.234 Run status group 0 (all jobs): 00:42:41.234 READ: bw=46.3MiB/s (48.5MB/s), 1940KiB/s-2039KiB/s (1987kB/s-2088kB/s), io=464MiB (487MB), run=10001-10028msec 
00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # runtime=5 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.234 bdev_null0 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.234 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.235 17:06:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.235 [2024-11-05 17:06:46.938248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.235 bdev_null1 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:41.235 17:06:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:41.235 { 00:42:41.235 "params": { 00:42:41.235 "name": "Nvme$subsystem", 00:42:41.235 "trtype": "$TEST_TRANSPORT", 00:42:41.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:41.235 
"adrfam": "ipv4", 00:42:41.235 "trsvcid": "$NVMF_PORT", 00:42:41.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:41.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:41.235 "hdgst": ${hdgst:-false}, 00:42:41.235 "ddgst": ${ddgst:-false} 00:42:41.235 }, 00:42:41.235 "method": "bdev_nvme_attach_controller" 00:42:41.235 } 00:42:41.235 EOF 00:42:41.235 )") 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file <= files )) 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:41.235 17:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:41.235 { 00:42:41.235 "params": { 00:42:41.235 "name": "Nvme$subsystem", 00:42:41.235 "trtype": "$TEST_TRANSPORT", 00:42:41.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:41.235 "adrfam": "ipv4", 00:42:41.235 "trsvcid": "$NVMF_PORT", 00:42:41.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:41.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:41.235 "hdgst": ${hdgst:-false}, 00:42:41.235 "ddgst": ${ddgst:-false} 00:42:41.235 }, 00:42:41.235 "method": "bdev_nvme_attach_controller" 00:42:41.235 } 00:42:41.235 EOF 00:42:41.235 )") 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:41.235 "params": { 00:42:41.235 "name": "Nvme0", 00:42:41.235 "trtype": "tcp", 00:42:41.235 "traddr": "10.0.0.2", 00:42:41.235 "adrfam": "ipv4", 00:42:41.235 "trsvcid": "4420", 00:42:41.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:41.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:41.235 "hdgst": false, 00:42:41.235 "ddgst": false 00:42:41.235 }, 00:42:41.235 "method": "bdev_nvme_attach_controller" 00:42:41.235 },{ 00:42:41.235 "params": { 00:42:41.235 "name": "Nvme1", 00:42:41.235 "trtype": "tcp", 00:42:41.235 "traddr": "10.0.0.2", 00:42:41.235 "adrfam": "ipv4", 00:42:41.235 "trsvcid": "4420", 00:42:41.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:41.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:41.235 "hdgst": false, 00:42:41.235 "ddgst": false 00:42:41.235 }, 00:42:41.235 "method": "bdev_nvme_attach_controller" 00:42:41.235 }' 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:41.235 17:06:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:41.235 17:06:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:41.235 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:41.235 ... 00:42:41.235 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:41.235 ... 00:42:41.235 fio-3.35 00:42:41.235 Starting 4 threads 00:42:46.741 00:42:46.741 filename0: (groupid=0, jobs=1): err= 0: pid=3490235: Tue Nov 5 17:06:53 2024 00:42:46.741 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5003msec) 00:42:46.741 slat (nsec): min=5391, max=56190, avg=7943.71, stdev=2939.45 00:42:46.741 clat (usec): min=1212, max=6326, avg=3771.78, stdev=337.94 00:42:46.742 lat (usec): min=1230, max=6331, avg=3779.72, stdev=338.04 00:42:46.742 clat percentiles (usec): 00:42:46.742 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3458], 20.00th=[ 3589], 00:42:46.742 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3785], 60.00th=[ 3818], 00:42:46.742 | 70.00th=[ 3818], 80.00th=[ 3818], 90.00th=[ 4080], 95.00th=[ 4178], 00:42:46.742 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 5800], 99.95th=[ 5997], 00:42:46.742 | 99.99th=[ 6325] 00:42:46.742 bw ( KiB/s): min=16704, max=17344, per=25.26%, avg=16872.00, stdev=204.66, samples=10 00:42:46.742 iops : min= 2088, max= 2168, avg=2109.00, stdev=25.58, samples=10 00:42:46.742 lat (msec) : 2=0.02%, 4=88.85%, 10=11.13% 00:42:46.742 cpu : usr=96.78%, sys=2.96%, ctx=8, majf=0, minf=9 00:42:46.742 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.742 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.742 issued rwts: total=10550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.742 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:46.742 filename0: (groupid=0, jobs=1): err= 0: pid=3490236: Tue Nov 5 17:06:53 2024 00:42:46.742 read: IOPS=2084, BW=16.3MiB/s (17.1MB/s)(81.4MiB/5002msec) 00:42:46.742 slat (nsec): min=5407, max=58653, avg=7785.08, stdev=2512.62 00:42:46.742 clat (usec): min=1557, max=6610, avg=3816.12, stdev=356.58 00:42:46.742 lat (usec): min=1563, max=6615, avg=3823.90, stdev=356.58 00:42:46.742 clat percentiles (usec): 00:42:46.742 | 1.00th=[ 3097], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3621], 00:42:46.742 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3785], 60.00th=[ 3818], 00:42:46.742 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4113], 95.00th=[ 4178], 00:42:46.742 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 6063], 99.95th=[ 6259], 00:42:46.742 | 99.99th=[ 6587] 00:42:46.742 bw ( KiB/s): min=16432, max=16928, per=24.98%, avg=16682.67, stdev=157.58, samples=9 00:42:46.742 iops : min= 2054, max= 2116, avg=2085.33, stdev=19.70, samples=9 00:42:46.742 lat (msec) : 2=0.05%, 4=86.91%, 10=13.05% 00:42:46.742 cpu : usr=97.04%, sys=2.70%, ctx=5, majf=0, minf=9 00:42:46.742 IO depths : 1=0.1%, 2=0.1%, 4=74.2%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.742 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.742 issued rwts: total=10425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.742 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:46.742 filename1: (groupid=0, jobs=1): err= 0: pid=3490237: Tue Nov 5 17:06:53 2024 00:42:46.742 read: IOPS=2103, BW=16.4MiB/s (17.2MB/s)(82.2MiB/5002msec) 00:42:46.742 slat (nsec): min=5393, max=57413, avg=5841.20, stdev=1289.25 00:42:46.742 clat (usec): min=1683, max=6349, avg=3789.15, stdev=459.45 00:42:46.742 lat (usec): min=1689, 
max=6354, avg=3794.99, stdev=459.47 00:42:46.742 clat percentiles (usec): 00:42:46.742 | 1.00th=[ 2868], 5.00th=[ 3130], 10.00th=[ 3359], 20.00th=[ 3556], 00:42:46.742 | 30.00th=[ 3687], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3818], 00:42:46.742 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4080], 95.00th=[ 4817], 00:42:46.742 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6063], 00:42:46.742 | 99.99th=[ 6325] 00:42:46.742 bw ( KiB/s): min=16592, max=17216, per=25.16%, avg=16803.56, stdev=231.14, samples=9 00:42:46.742 iops : min= 2074, max= 2152, avg=2100.44, stdev=28.89, samples=9 00:42:46.742 lat (msec) : 2=0.02%, 4=88.80%, 10=11.18% 00:42:46.742 cpu : usr=96.56%, sys=3.22%, ctx=5, majf=0, minf=9 00:42:46.742 IO depths : 1=0.1%, 2=0.2%, 4=67.4%, 8=32.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.742 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.742 issued rwts: total=10521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.742 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:46.742 filename1: (groupid=0, jobs=1): err= 0: pid=3490238: Tue Nov 5 17:06:53 2024 00:42:46.742 read: IOPS=2053, BW=16.0MiB/s (16.8MB/s)(80.3MiB/5002msec) 00:42:46.742 slat (nsec): min=5392, max=59764, avg=7818.28, stdev=3206.84 00:42:46.742 clat (usec): min=2576, max=7659, avg=3876.26, stdev=417.55 00:42:46.742 lat (usec): min=2582, max=7685, avg=3884.07, stdev=417.26 00:42:46.742 clat percentiles (usec): 00:42:46.742 | 1.00th=[ 3326], 5.00th=[ 3556], 10.00th=[ 3589], 20.00th=[ 3720], 00:42:46.742 | 30.00th=[ 3785], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3818], 00:42:46.742 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4178], 95.00th=[ 4424], 00:42:46.742 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6259], 99.95th=[ 6587], 00:42:46.742 | 99.99th=[ 7570] 00:42:46.742 bw ( KiB/s): min=15760, max=16768, per=24.60%, avg=16432.00, stdev=372.34, 
samples=10 00:42:46.742 iops : min= 1970, max= 2096, avg=2054.00, stdev=46.54, samples=10 00:42:46.742 lat (msec) : 4=84.61%, 10=15.39% 00:42:46.742 cpu : usr=96.84%, sys=2.92%, ctx=5, majf=0, minf=9 00:42:46.742 IO depths : 1=0.1%, 2=0.1%, 4=68.4%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.742 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.742 issued rwts: total=10273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.742 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:46.742 00:42:46.742 Run status group 0 (all jobs): 00:42:46.742 READ: bw=65.2MiB/s (68.4MB/s), 16.0MiB/s-16.5MiB/s (16.8MB/s-17.3MB/s), io=326MiB (342MB), run=5002-5003msec 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.742 00:42:46.742 real 0m24.616s 00:42:46.742 user 5m23.909s 00:42:46.742 sys 0m4.359s 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:46.742 17:06:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:46.742 ************************************ 00:42:46.742 END TEST fio_dif_rand_params 00:42:46.742 ************************************ 00:42:46.742 17:06:53 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:46.742 17:06:53 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:46.742 17:06:53 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:46.742 17:06:53 nvmf_dif -- common/autotest_common.sh@10 -- # 
set +x 00:42:46.742 ************************************ 00:42:46.742 START TEST fio_dif_digest 00:42:46.742 ************************************ 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:46.742 bdev_null0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:46.742 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:46.743 [2024-11-05 17:06:53.478289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=() 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.743 17:06:53 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:46.743 { 00:42:46.743 "params": { 00:42:46.743 "name": "Nvme$subsystem", 00:42:46.743 "trtype": "$TEST_TRANSPORT", 00:42:46.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:46.743 "adrfam": "ipv4", 00:42:46.743 "trsvcid": "$NVMF_PORT", 00:42:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:46.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:46.743 "hdgst": ${hdgst:-false}, 00:42:46.743 "ddgst": ${ddgst:-false} 00:42:46.743 }, 00:42:46.743 "method": "bdev_nvme_attach_controller" 00:42:46.743 } 00:42:46.743 EOF 00:42:46.743 )") 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 
00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:46.743 "params": { 00:42:46.743 "name": "Nvme0", 00:42:46.743 "trtype": "tcp", 00:42:46.743 "traddr": "10.0.0.2", 00:42:46.743 "adrfam": "ipv4", 00:42:46.743 "trsvcid": "4420", 00:42:46.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:46.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:46.743 "hdgst": true, 00:42:46.743 "ddgst": true 00:42:46.743 }, 00:42:46.743 "method": "bdev_nvme_attach_controller" 00:42:46.743 }' 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 
00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:46.743 17:06:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.011 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:47.011 ... 00:42:47.011 fio-3.35 00:42:47.011 Starting 3 threads 00:42:59.246 00:42:59.246 filename0: (groupid=0, jobs=1): err= 0: pid=3491755: Tue Nov 5 17:07:04 2024 00:42:59.246 read: IOPS=270, BW=33.9MiB/s (35.5MB/s)(340MiB/10045msec) 00:42:59.246 slat (nsec): min=5653, max=30714, avg=6629.94, stdev=1094.45 00:42:59.246 clat (usec): min=5047, max=50381, avg=11050.57, stdev=2109.16 00:42:59.246 lat (usec): min=5053, max=50387, avg=11057.20, stdev=2109.24 00:42:59.246 clat percentiles (usec): 00:42:59.246 | 1.00th=[ 6390], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 8979], 00:42:59.246 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[11600], 60.00th=[11994], 00:42:59.246 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13173], 95.00th=[13435], 00:42:59.246 | 99.00th=[14091], 99.50th=[14484], 99.90th=[15533], 99.95th=[46400], 00:42:59.246 | 99.99th=[50594] 00:42:59.246 bw ( KiB/s): min=30976, max=37632, per=41.93%, avg=34803.20, stdev=1954.01, samples=20 00:42:59.246 iops : min= 242, max= 294, avg=271.90, stdev=15.27, samples=20 00:42:59.246 lat (msec) : 10=31.17%, 20=68.76%, 50=0.04%, 100=0.04% 00:42:59.246 cpu : usr=92.80%, sys=6.23%, ctx=480, majf=0, minf=136 00:42:59.246 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:42:59.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:59.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:59.246 issued rwts: total=2721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:59.246 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:59.246 filename0: (groupid=0, jobs=1): err= 0: pid=3491756: Tue Nov 5 17:07:04 2024 00:42:59.246 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(229MiB/10046msec) 00:42:59.246 slat (nsec): min=5633, max=31242, avg=6541.29, stdev=1111.24 00:42:59.246 clat (usec): min=8743, max=97945, avg=16390.97, stdev=10261.76 00:42:59.246 lat (usec): min=8749, max=97951, avg=16397.51, stdev=10261.75 00:42:59.246 clat percentiles (usec): 00:42:59.246 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10945], 20.00th=[12649], 00:42:59.246 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 00:42:59.246 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16450], 95.00th=[53216], 00:42:59.246 | 99.00th=[56361], 99.50th=[56886], 99.90th=[96994], 99.95th=[98042], 00:42:59.246 | 99.99th=[98042] 00:42:59.246 bw ( KiB/s): min=17920, max=28160, per=28.27%, avg=23465.00, stdev=3124.76, samples=20 00:42:59.246 iops : min= 140, max= 220, avg=183.30, stdev=24.39, samples=20 00:42:59.246 lat (msec) : 10=3.00%, 20=91.01%, 50=0.11%, 100=5.89% 00:42:59.246 cpu : usr=95.34%, sys=4.45%, ctx=17, majf=0, minf=145 00:42:59.246 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:59.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:59.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:59.246 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:59.246 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:59.246 filename0: (groupid=0, jobs=1): err= 0: pid=3491757: Tue Nov 5 17:07:04 2024 00:42:59.246 read: IOPS=195, BW=24.5MiB/s 
(25.7MB/s)(245MiB/10008msec) 00:42:59.246 slat (nsec): min=5729, max=49065, avg=6869.07, stdev=1607.06 00:42:59.246 clat (usec): min=8013, max=96344, avg=15315.58, stdev=9377.76 00:42:59.246 lat (usec): min=8019, max=96354, avg=15322.44, stdev=9377.78 00:42:59.246 clat percentiles (usec): 00:42:59.246 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10552], 20.00th=[12125], 00:42:59.246 | 30.00th=[12780], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:42:59.246 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15795], 95.00th=[16712], 00:42:59.246 | 99.00th=[56361], 99.50th=[56886], 99.90th=[95945], 99.95th=[95945], 00:42:59.246 | 99.99th=[95945] 00:42:59.246 bw ( KiB/s): min=19968, max=30720, per=30.42%, avg=25249.68, stdev=2645.62, samples=19 00:42:59.246 iops : min= 156, max= 240, avg=197.26, stdev=20.67, samples=19 00:42:59.246 lat (msec) : 10=4.54%, 20=91.17%, 50=0.05%, 100=4.24% 00:42:59.246 cpu : usr=93.56%, sys=5.36%, ctx=748, majf=0, minf=83 00:42:59.246 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:59.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:59.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:59.246 issued rwts: total=1959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:59.246 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:59.246 00:42:59.246 Run status group 0 (all jobs): 00:42:59.246 READ: bw=81.1MiB/s (85.0MB/s), 22.8MiB/s-33.9MiB/s (23.9MB/s-35.5MB/s), io=814MiB (854MB), run=10008-10046msec 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:59.246 
17:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:59.246 00:42:59.246 real 0m11.324s 00:42:59.246 user 0m42.925s 00:42:59.246 sys 0m1.953s 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:59.246 17:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:59.246 ************************************ 00:42:59.246 END TEST fio_dif_digest 00:42:59.246 ************************************ 00:42:59.246 17:07:04 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:59.246 17:07:04 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:59.246 rmmod nvme_tcp 00:42:59.246 rmmod nvme_fabrics 00:42:59.246 rmmod nvme_keyring 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:42:59.246 
17:07:04 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 3480797 ']' 00:42:59.246 17:07:04 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 3480797 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3480797 ']' 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3480797 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3480797 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3480797' 00:42:59.246 killing process with pid 3480797 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3480797 00:42:59.246 17:07:04 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3480797 00:42:59.246 17:07:05 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:42:59.246 17:07:05 nvmf_dif -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:01.797 Waiting for block devices as requested 00:43:01.797 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:01.797 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:01.797 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:01.797 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:01.797 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:01.797 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:02.058 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:02.058 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:02.058 0000:65:00.0 (144d a80a): vfio-pci 
-> nvme 00:43:02.318 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:02.318 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:02.579 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:02.579 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:02.579 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:02.579 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:02.838 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:02.838 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:03.099 17:07:10 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:43:03.099 17:07:10 nvmf_dif -- nvmf/setup.sh@254 -- # local dev 00:43:03.099 17:07:10 nvmf_dif -- nvmf/setup.sh@257 -- # remove_target_ns 00:43:03.099 17:07:10 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:43:03.099 17:07:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:43:03.099 17:07:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@258 -- # delete_main_bridge 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@121 -- # return 0 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:05.641 17:07:12 nvmf_dif -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:43:05.641 17:07:12 nvmf_dif -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:43:05.642 17:07:12 nvmf_dif -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:43:05.642 17:07:12 nvmf_dif -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:43:05.642 17:07:12 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:43:05.642 17:07:12 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:43:05.642 17:07:12 nvmf_dif -- nvmf/setup.sh@274 -- # iptr 00:43:05.642 17:07:12 nvmf_dif -- nvmf/common.sh@548 -- # iptables-save 00:43:05.642 17:07:12 nvmf_dif -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:43:05.642 17:07:12 nvmf_dif -- nvmf/common.sh@548 -- # iptables-restore 00:43:05.642 00:43:05.642 real 1m18.433s 00:43:05.642 user 8m8.693s 00:43:05.642 sys 0m21.618s 00:43:05.642 17:07:12 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:05.642 17:07:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:05.642 ************************************ 00:43:05.642 END TEST nvmf_dif 00:43:05.642 ************************************ 00:43:05.642 17:07:12 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:05.642 17:07:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:05.642 17:07:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:05.642 17:07:12 -- common/autotest_common.sh@10 -- # set +x 00:43:05.642 ************************************ 00:43:05.642 START TEST nvmf_abort_qd_sizes 00:43:05.642 ************************************ 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:05.642 * Looking for test storage... 00:43:05.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:05.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.642 --rc genhtml_branch_coverage=1 00:43:05.642 --rc genhtml_function_coverage=1 00:43:05.642 --rc genhtml_legend=1 00:43:05.642 --rc geninfo_all_blocks=1 00:43:05.642 --rc geninfo_unexecuted_blocks=1 00:43:05.642 00:43:05.642 ' 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:05.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.642 --rc genhtml_branch_coverage=1 00:43:05.642 --rc genhtml_function_coverage=1 00:43:05.642 --rc genhtml_legend=1 00:43:05.642 --rc 
geninfo_all_blocks=1 00:43:05.642 --rc geninfo_unexecuted_blocks=1 00:43:05.642 00:43:05.642 ' 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:05.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.642 --rc genhtml_branch_coverage=1 00:43:05.642 --rc genhtml_function_coverage=1 00:43:05.642 --rc genhtml_legend=1 00:43:05.642 --rc geninfo_all_blocks=1 00:43:05.642 --rc geninfo_unexecuted_blocks=1 00:43:05.642 00:43:05.642 ' 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:05.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.642 --rc genhtml_branch_coverage=1 00:43:05.642 --rc genhtml_function_coverage=1 00:43:05.642 --rc genhtml_legend=1 00:43:05.642 --rc geninfo_all_blocks=1 00:43:05.642 --rc geninfo_unexecuted_blocks=1 00:43:05.642 00:43:05.642 ' 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.642 17:07:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:43:05.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # xtrace_disable 
00:43:05.643 17:07:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # pci_devs=() 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # local -a pci_devs 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # pci_net_devs=() 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # pci_drivers=() 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # local -A pci_drivers 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # net_devs=() 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # local -ga net_devs 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # e810=() 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # local -ga e810 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # x722=() 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # local -ga x722 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # mlx=() 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # local -ga mlx 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:12.223 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:43:12.223 17:07:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:12.223 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:12.223 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci 
in "${pci_devs[@]}" 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:12.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # is_hw=yes 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@247 -- # create_target_ns 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 
00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:43:12.223 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:43:12.224 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:43:12.224 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:43:12.224 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:43:12.224 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:43:12.224 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:43:12.224 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:43:12.484 10.0.0.1 00:43:12.484 
17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:43:12.484 10.0.0.2 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:43:12.484 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:43:12.485 17:07:19 
nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 1 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- 
# get_ip_address initiator0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:43:12.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:12.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.499 ms 00:43:12.485 00:43:12.485 --- 10.0.0.1 ping statistics --- 00:43:12.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:12.485 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= 
count=1 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:43:12.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:12.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:43:12.485 00:43:12.485 --- 10.0.0.2 ping statistics --- 00:43:12.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:12.485 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # return 0 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:43:12.485 17:07:19 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:16.696 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:43:16.696 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:43:16.696 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # get_initiator_ip_address 
initiator1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # return 1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev= 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@160 -- # return 0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo 
cvl_0_1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # return 1 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev= 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@160 -- # return 0 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:43:16.696 ' 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=3501172 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 3501172 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3501172 ']' 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:16.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:16.696 17:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:16.696 [2024-11-05 17:07:23.686502] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 00:43:16.696 [2024-11-05 17:07:23.686553] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:16.957 [2024-11-05 17:07:23.764354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:16.957 [2024-11-05 17:07:23.802639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:16.957 [2024-11-05 17:07:23.802672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:16.957 [2024-11-05 17:07:23.802680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:16.957 [2024-11-05 17:07:23.802687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:16.957 [2024-11-05 17:07:23.802693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
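The `nvmf/setup.sh` trace earlier in this log allocates interface addresses from an integer pool (`ip_pool=0x0a000001`) and converts each value to a dotted quad via its `val_to_ip` helper (`printf '%u.%u.%u.%u'`), yielding 10.0.0.1 for cvl_0_0 and 10.0.0.2 for cvl_0_1. A minimal Python sketch of that conversion (an illustration only, not the actual SPDK helper):

```python
def val_to_ip(val: int) -> str:
    """Split a 32-bit integer into four octets, most significant byte first,
    mirroring what setup.sh's val_to_ip does with shell arithmetic."""
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# The pool values seen in the trace above:
print(val_to_ip(167772161))  # 0x0a000001 -> 10.0.0.1 (assigned to cvl_0_0)
print(val_to_ip(167772162))  # 0x0a000002 -> 10.0.0.2 (assigned to cvl_0_1 in nvmf_ns_spdk)
```

Consecutive pool values land on the initiator/target pair of the same interface set, which is why the trace assigns 10.0.0.1 outside the namespace and 10.0.0.2 inside `nvmf_ns_spdk`.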
00:43:16.957 [2024-11-05 17:07:23.804317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:16.957 [2024-11-05 17:07:23.804432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:16.957 [2024-11-05 17:07:23.804587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:16.957 [2024-11-05 17:07:23.804588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:17.529 17:07:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:17.529 ************************************ 00:43:17.529 START TEST spdk_target_abort 00:43:17.529 ************************************ 00:43:17.529 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:43:17.529 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:17.529 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:43:17.529 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:17.529 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:18.101 spdk_targetn1 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:18.101 [2024-11-05 17:07:24.880759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:18.101 [2024-11-05 17:07:24.937060] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:18.101 17:07:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:18.101 [2024-11-05 17:07:25.118254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1136 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:43:18.101 [2024-11-05 17:07:25.118283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:008f p:1 m:0 dnr:0 00:43:18.101 [2024-11-05 17:07:25.118411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1152 len:8 PRP1 0x200004abe000 PRP2 0x0 00:43:18.101 [2024-11-05 17:07:25.118421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0091 p:1 m:0 dnr:0 00:43:18.101 [2024-11-05 17:07:25.126218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1432 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:43:18.101 [2024-11-05 
17:07:25.126233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b4 p:1 m:0 dnr:0 00:43:18.101 [2024-11-05 17:07:25.152122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2440 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:43:18.101 [2024-11-05 17:07:25.152139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:18.363 [2024-11-05 17:07:25.165766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2944 len:8 PRP1 0x200004abe000 PRP2 0x0 00:43:18.363 [2024-11-05 17:07:25.165783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:18.363 [2024-11-05 17:07:25.189114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3728 len:8 PRP1 0x200004abe000 PRP2 0x0 00:43:18.363 [2024-11-05 17:07:25.189136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00d5 p:0 m:0 dnr:0 00:43:21.666 Initializing NVMe Controllers 00:43:21.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:21.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:21.666 Initialization complete. Launching workers. 
00:43:21.666 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12420, failed: 6 00:43:21.666 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3008, failed to submit 9418 00:43:21.666 success 726, unsuccessful 2282, failed 0 00:43:21.666 17:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:21.666 17:07:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:21.666 [2024-11-05 17:07:28.392023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e50000 PRP2 0x0 00:43:21.666 [2024-11-05 17:07:28.392061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:43:21.666 [2024-11-05 17:07:28.486875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:2552 len:8 PRP1 0x200004e50000 PRP2 0x0 00:43:21.666 [2024-11-05 17:07:28.486902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:21.666 [2024-11-05 17:07:28.549889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:4040 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:43:21.666 [2024-11-05 17:07:28.549915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:24.968 Initializing NVMe Controllers 00:43:24.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:24.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:24.968 Initialization complete. 
Launching workers. 00:43:24.968 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8541, failed: 3 00:43:24.968 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1201, failed to submit 7343 00:43:24.968 success 306, unsuccessful 895, failed 0 00:43:24.968 17:07:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:24.968 17:07:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:27.514 [2024-11-05 17:07:34.465132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:307640 len:8 PRP1 0x200004b26000 PRP2 0x0 00:43:27.514 [2024-11-05 17:07:34.465172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00ca p:1 m:0 dnr:0 00:43:27.774 Initializing NVMe Controllers 00:43:27.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:27.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:27.774 Initialization complete. Launching workers. 
00:43:27.774 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41923, failed: 1 00:43:27.774 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2763, failed to submit 39161 00:43:27.774 success 608, unsuccessful 2155, failed 0 00:43:27.774 17:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:27.774 17:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:27.774 17:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:27.774 17:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:27.774 17:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:27.774 17:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:27.774 17:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:29.686 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:29.686 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3501172 00:43:29.686 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3501172 ']' 00:43:29.686 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3501172 00:43:29.686 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:43:29.686 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:29.686 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3501172 00:43:29.687 17:07:36 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:29.687 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:29.687 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3501172' 00:43:29.687 killing process with pid 3501172 00:43:29.687 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3501172 00:43:29.687 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 3501172 00:43:29.948 00:43:29.948 real 0m12.204s 00:43:29.948 user 0m49.891s 00:43:29.948 sys 0m1.818s 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:29.948 ************************************ 00:43:29.948 END TEST spdk_target_abort 00:43:29.948 ************************************ 00:43:29.948 17:07:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:29.948 17:07:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:29.948 17:07:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:29.948 17:07:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:29.948 ************************************ 00:43:29.948 START TEST kernel_target_abort 00:43:29.948 ************************************ 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:43:29.948 17:07:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:43:29.948 17:07:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:29.948 17:07:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:33.246 Waiting for block devices as requested 00:43:33.246 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:33.246 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:33.246 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:33.246 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:33.246 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:33.246 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:33.246 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:33.506 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:33.506 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:43:33.766 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:33.766 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:33.766 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:33.766 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:34.026 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:34.026 0000:00:01.3 (8086 0b00): 
vfio-pci -> ioatdma 00:43:34.026 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:34.026 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:34.598 No valid GPT data, bailing 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:43:34.598 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:43:34.599 00:43:34.599 Discovery Log Number of Records 2, Generation counter 2 00:43:34.599 =====Discovery Log Entry 0====== 00:43:34.599 trtype: tcp 00:43:34.599 adrfam: ipv4 00:43:34.599 subtype: current discovery subsystem 00:43:34.599 treq: not specified, sq flow control disable supported 00:43:34.599 portid: 1 00:43:34.599 trsvcid: 4420 
00:43:34.599 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:34.599 traddr: 10.0.0.1 00:43:34.599 eflags: none 00:43:34.599 sectype: none 00:43:34.599 =====Discovery Log Entry 1====== 00:43:34.599 trtype: tcp 00:43:34.599 adrfam: ipv4 00:43:34.599 subtype: nvme subsystem 00:43:34.599 treq: not specified, sq flow control disable supported 00:43:34.599 portid: 1 00:43:34.599 trsvcid: 4420 00:43:34.599 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:34.599 traddr: 10.0.0.1 00:43:34.599 eflags: none 00:43:34.599 sectype: none 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype 
adrfam traddr trsvcid subnqn 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:34.599 17:07:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:37.901 Initializing NVMe Controllers 00:43:37.901 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:37.901 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:37.901 Initialization complete. Launching workers. 
00:43:37.901 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66748, failed: 0 00:43:37.901 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66748, failed to submit 0 00:43:37.901 success 0, unsuccessful 66748, failed 0 00:43:37.901 17:07:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:37.901 17:07:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:41.202 Initializing NVMe Controllers 00:43:41.202 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:41.202 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:41.202 Initialization complete. Launching workers. 00:43:41.202 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107696, failed: 0 00:43:41.202 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27130, failed to submit 80566 00:43:41.202 success 0, unsuccessful 27130, failed 0 00:43:41.202 17:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:41.202 17:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:44.504 Initializing NVMe Controllers 00:43:44.504 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:44.504 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:44.504 Initialization complete. Launching workers. 
00:43:44.504 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101025, failed: 0
00:43:44.504 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25258, failed to submit 75767
00:43:44.504 success 0, unsuccessful 25258, failed 0
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*)
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet
00:43:44.504 17:07:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:43:47.048 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:43:47.048 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:43:48.962 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:43:49.224
00:43:49.224 real 0m19.269s
00:43:49.224 user 0m9.266s
00:43:49.224 sys 0m5.501s
00:43:49.224 17:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable
00:43:49.224 17:07:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:43:49.224 ************************************
00:43:49.224 END TEST kernel_target_abort
00:43:49.224 ************************************
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20}
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@105
-- # modprobe -v -r nvme-fabrics
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 3501172 ']'
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 3501172
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3501172 ']'
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3501172
00:43:49.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3501172) - No such process
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3501172 is not found'
00:43:49.224 Process with pid 3501172 is not found
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']'
00:43:49.224 17:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:43:52.524 Waiting for block devices as requested
00:43:52.844 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:43:52.844 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:43:52.844 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:43:52.844 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:43:53.140 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:43:53.140 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:43:53.140 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:43:53.442 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:43:53.442 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:43:53.442 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:43:53.731 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:43:53.731 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:43:53.731 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:43:53.731 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:43:53.731
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:43:53.992 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:43:53.992 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:43:54.253 17:08:01 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini
00:43:54.253 17:08:01 nvmf_abort_qd_sizes -- nvmf/setup.sh@254 -- # local dev
00:43:54.253 17:08:01 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # remove_target_ns
00:43:54.253 17:08:01 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:43:54.253 17:08:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null'
00:43:54.253 17:08:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # delete_main_bridge
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # return 0
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns=
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:43:56.798 17:08:03 nvmf_abort_qd_sizes --
nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=()
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@274 -- # iptr
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-save
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-restore
00:43:56.798
00:43:56.798 real 0m51.068s
00:43:56.798 user 1m4.612s
00:43:56.798 sys 0m18.185s
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:43:56.798 17:08:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:43:56.798 ************************************
00:43:56.798 END TEST nvmf_abort_qd_sizes
00:43:56.798 ************************************
00:43:56.798 17:08:03 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:43:56.798 17:08:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:43:56.798 17:08:03 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:43:56.798 17:08:03 -- common/autotest_common.sh@10 -- # set +x
00:43:56.798 ************************************
00:43:56.798 START TEST keyring_file
00:43:56.798 ************************************
17:08:03 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:43:56.798 17:08:03 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:43:56.798 17:08:03 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version
00:43:56.798 17:08:03 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:43:56.798 17:08:03 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@336 -- # IFS=.-:
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@336 -- # read -ra ver1
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@337 -- # IFS=.-:
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@337 -- # read -ra ver2
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@338 -- # local 'op=<'
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@340 -- # ver1_l=2
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@341 -- # ver2_l=1
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@344 -- # case "$op" in
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@345 -- # : 1
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@364 -- # (( v = 0 ))
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@365 -- # decimal 1
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@353 -- # local d=1
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@355 -- # echo 1
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@366 -- # decimal 2
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@353 -- # local d=2
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@355 -- # echo 2
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:43:56.798 17:08:03 keyring_file -- scripts/common.sh@368 -- # return 0
00:43:56.798 17:08:03 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:43:56.798 17:08:03 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:43:56.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:56.798 --rc genhtml_branch_coverage=1
00:43:56.798 --rc genhtml_function_coverage=1
00:43:56.798 --rc genhtml_legend=1
00:43:56.798 --rc geninfo_all_blocks=1
00:43:56.798 --rc geninfo_unexecuted_blocks=1
00:43:56.798
00:43:56.798 '
00:43:56.798 17:08:03 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:43:56.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:56.798 --rc genhtml_branch_coverage=1
00:43:56.799 --rc genhtml_function_coverage=1
00:43:56.799 --rc genhtml_legend=1
00:43:56.799 --rc geninfo_all_blocks=1
00:43:56.799 --rc geninfo_unexecuted_blocks=1
00:43:56.799
00:43:56.799 '
00:43:56.799
17:08:03 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:43:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:56.799 --rc genhtml_branch_coverage=1
00:43:56.799 --rc genhtml_function_coverage=1
00:43:56.799 --rc genhtml_legend=1
00:43:56.799 --rc geninfo_all_blocks=1
00:43:56.799 --rc geninfo_unexecuted_blocks=1
00:43:56.799
00:43:56.799 '
00:43:56.799 17:08:03 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:43:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:56.799 --rc genhtml_branch_coverage=1
00:43:56.799 --rc genhtml_function_coverage=1
00:43:56.799 --rc genhtml_legend=1
00:43:56.799 --rc geninfo_all_blocks=1
00:43:56.799 --rc geninfo_unexecuted_blocks=1
00:43:56.799
00:43:56.799 '
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@7 -- # uname -s
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@16
-- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:43:56.799 17:08:03 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob
00:43:56.799 17:08:03 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:43:56.799 17:08:03 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:43:56.799 17:08:03 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:43:56.799 17:08:03 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:56.799 17:08:03 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:56.799 17:08:03 keyring_file -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:56.799 17:08:03 keyring_file -- paths/export.sh@5 -- # export PATH
00:43:56.799 17:08:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:43:56.799 17:08:03 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:43:56.799 17:08:03 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:43:56.799 17:08:03 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@50 -- # : 0
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:43:56.799
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@17 -- # name=key0
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@17 -- # digest=0
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@18 -- # mktemp
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1xMLNJpsEs
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@506 -- #
key=00112233445566778899aabbccddeeff
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@506 -- # digest=0
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@507 -- # python -
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1xMLNJpsEs
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1xMLNJpsEs
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.1xMLNJpsEs
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@17 -- # name=key1
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@17 -- # digest=0
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@18 -- # mktemp
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6NI92n1wOQ
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@506 -- # digest=0
00:43:56.799 17:08:03 keyring_file -- nvmf/common.sh@507 -- # python -
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6NI92n1wOQ
00:43:56.799 17:08:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6NI92n1wOQ
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6NI92n1wOQ
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=3511357
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3511357
00:43:56.799 17:08:03 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:43:56.799 17:08:03 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3511357 ']'
00:43:56.799 17:08:03 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:43:56.799 17:08:03 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100
00:43:56.799 17:08:03 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:43:56.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:43:56.799 17:08:03 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable
00:43:56.799 17:08:03 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:56.799 [2024-11-05 17:08:03.760981] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization...
00:43:56.799 [2024-11-05 17:08:03.761038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511357 ]
00:43:56.799 [2024-11-05 17:08:03.831770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:57.060 [2024-11-05 17:08:03.868148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@866 -- # return 0
00:43:57.632 17:08:04 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:57.632 [2024-11-05 17:08:04.558218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
null0
00:43:57.632 [2024-11-05 17:08:04.590263] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:43:57.632 [2024-11-05 17:08:04.590503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:43:57.632 17:08:04 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:57.632 [2024-11-05 17:08:04.622334] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:43:57.632 request:
00:43:57.632 {
00:43:57.632 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:43:57.632 "secure_channel": false,
00:43:57.632 "listen_address": {
00:43:57.632 "trtype": "tcp",
00:43:57.632 "traddr": "127.0.0.1",
00:43:57.632 "trsvcid": "4420"
00:43:57.632 },
00:43:57.632 "method": "nvmf_subsystem_add_listener",
00:43:57.632 "req_id": 1
00:43:57.632 }
00:43:57.632 Got JSON-RPC error response
00:43:57.632 response:
00:43:57.632 {
00:43:57.632 "code": -32602,
00:43:57.632 "message": "Invalid parameters"
00:43:57.632 }
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:43:57.632 17:08:04 keyring_file -- keyring/file.sh@47 -- # bperfpid=3511425
00:43:57.632 17:08:04 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3511425 /var/tmp/bperf.sock
00:43:57.632 17:08:04 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:43:57.632 17:08:04
keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3511425 ']'
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:43:57.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable
00:43:57.632 17:08:04 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:57.632 [2024-11-05 17:08:04.679163] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization...
00:43:57.632 [2024-11-05 17:08:04.679211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511425 ]
00:43:57.894 [2024-11-05 17:08:04.767629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:57.894 [2024-11-05 17:08:04.803258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:43:58.465 17:08:05 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:43:58.465 17:08:05 keyring_file -- common/autotest_common.sh@866 -- # return 0
00:43:58.466 17:08:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1xMLNJpsEs
00:43:58.466 17:08:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1xMLNJpsEs
00:43:58.726 17:08:05 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6NI92n1wOQ
00:43:58.726 17:08:05 keyring_file -- keyring/common.sh@8 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6NI92n1wOQ
00:43:58.987 17:08:05 keyring_file -- keyring/file.sh@52 -- # get_key key0
00:43:58.987 17:08:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:43:58.987 17:08:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:58.987 17:08:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:58.987 17:08:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:58.987 17:08:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.1xMLNJpsEs == \/\t\m\p\/\t\m\p\.\1\x\M\L\N\J\p\s\E\s ]]
00:43:58.987 17:08:05 keyring_file -- keyring/file.sh@53 -- # get_key key1
00:43:58.987 17:08:05 keyring_file -- keyring/file.sh@53 -- # jq -r .path
00:43:58.987 17:08:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:58.987 17:08:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:58.987 17:08:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:43:59.247 17:08:06 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.6NI92n1wOQ == \/\t\m\p\/\t\m\p\.\6\N\I\9\2\n\1\w\O\Q ]]
00:43:59.247 17:08:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0
00:43:59.247 17:08:06 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:43:59.247 17:08:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:59.247 17:08:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:59.247 17:08:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:59.247 17:08:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:59.507 17:08:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:59.507 17:08:06 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:59.507 17:08:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:59.507 17:08:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:59.507 17:08:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:59.507 17:08:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:59.507 17:08:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:59.507 17:08:06 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:59.507 17:08:06 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:59.507 17:08:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:59.767 [2024-11-05 17:08:06.657599] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:59.767 nvme0n1 00:43:59.767 17:08:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:59.767 17:08:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:59.767 17:08:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:59.767 17:08:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:59.767 17:08:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:59.767 17:08:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:44:00.028 17:08:06 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:00.028 17:08:06 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:00.028 17:08:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:00.028 17:08:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:00.028 17:08:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.028 17:08:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:00.028 17:08:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:00.288 17:08:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:00.288 17:08:07 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:00.288 Running I/O for 1 seconds... 00:44:01.227 15781.00 IOPS, 61.64 MiB/s 00:44:01.228 Latency(us) 00:44:01.228 [2024-11-05T16:08:08.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:01.228 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:01.228 nvme0n1 : 1.01 15794.10 61.70 0.00 0.00 8072.87 3522.56 13380.27 00:44:01.228 [2024-11-05T16:08:08.291Z] =================================================================================================================== 00:44:01.228 [2024-11-05T16:08:08.291Z] Total : 15794.10 61.70 0.00 0.00 8072.87 3522.56 13380.27 00:44:01.228 { 00:44:01.228 "results": [ 00:44:01.228 { 00:44:01.228 "job": "nvme0n1", 00:44:01.228 "core_mask": "0x2", 00:44:01.228 "workload": "randrw", 00:44:01.228 "percentage": 50, 00:44:01.228 "status": "finished", 00:44:01.228 "queue_depth": 128, 00:44:01.228 "io_size": 4096, 00:44:01.228 "runtime": 1.007275, 00:44:01.228 "iops": 15794.097937504654, 00:44:01.228 "mibps": 61.69569506837755, 00:44:01.228 
"io_failed": 0, 00:44:01.228 "io_timeout": 0, 00:44:01.228 "avg_latency_us": 8072.866105977749, 00:44:01.228 "min_latency_us": 3522.56, 00:44:01.228 "max_latency_us": 13380.266666666666 00:44:01.228 } 00:44:01.228 ], 00:44:01.228 "core_count": 1 00:44:01.228 } 00:44:01.228 17:08:08 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:01.228 17:08:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:01.489 17:08:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:01.489 17:08:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:01.489 17:08:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.489 17:08:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.489 17:08:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:01.489 17:08:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.749 17:08:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:01.749 17:08:08 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:01.749 17:08:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:01.749 17:08:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.749 17:08:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.749 17:08:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:01.749 17:08:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.749 17:08:08 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:01.749 17:08:08 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:01.749 17:08:08 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:44:01.749 17:08:08 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:01.749 17:08:08 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:44:01.749 17:08:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:01.749 17:08:08 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:44:01.749 17:08:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:01.749 17:08:08 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:01.749 17:08:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:02.010 [2024-11-05 17:08:08.926240] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:02.010 [2024-11-05 17:08:08.926975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe4c10 (107): Transport endpoint is not connected 00:44:02.010 [2024-11-05 17:08:08.927970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe4c10 (9): Bad file descriptor 00:44:02.010 [2024-11-05 17:08:08.928972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:02.010 [2024-11-05 17:08:08.928986] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:02.010 [2024-11-05 17:08:08.928992] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:02.010 [2024-11-05 17:08:08.929000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:02.010 request: 00:44:02.010 { 00:44:02.010 "name": "nvme0", 00:44:02.010 "trtype": "tcp", 00:44:02.010 "traddr": "127.0.0.1", 00:44:02.010 "adrfam": "ipv4", 00:44:02.010 "trsvcid": "4420", 00:44:02.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:02.010 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:02.010 "prchk_reftag": false, 00:44:02.010 "prchk_guard": false, 00:44:02.010 "hdgst": false, 00:44:02.010 "ddgst": false, 00:44:02.010 "psk": "key1", 00:44:02.010 "allow_unrecognized_csi": false, 00:44:02.010 "method": "bdev_nvme_attach_controller", 00:44:02.010 "req_id": 1 00:44:02.010 } 00:44:02.010 Got JSON-RPC error response 00:44:02.010 response: 00:44:02.010 { 00:44:02.010 "code": -5, 00:44:02.010 "message": "Input/output error" 00:44:02.010 } 00:44:02.010 17:08:08 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:44:02.010 17:08:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:02.010 17:08:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:02.010 17:08:08 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:02.010 17:08:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:02.010 17:08:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:02.010 17:08:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:02.010 17:08:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:02.010 17:08:08 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:02.010 17:08:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:02.270 17:08:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:02.270 17:08:09 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:02.270 17:08:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:02.270 17:08:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:02.270 17:08:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:02.270 17:08:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:02.270 17:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:02.270 17:08:09 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:02.270 17:08:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:02.270 17:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:02.530 17:08:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:02.530 17:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:02.790 17:08:09 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:02.790 17:08:09 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:02.790 17:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:02.790 17:08:09 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:02.790 17:08:09 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.1xMLNJpsEs 00:44:02.790 17:08:09 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.1xMLNJpsEs 00:44:02.790 17:08:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:44:02.790 17:08:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.1xMLNJpsEs 00:44:02.790 17:08:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:44:02.790 17:08:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:02.790 17:08:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:44:02.790 17:08:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:02.790 17:08:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1xMLNJpsEs 00:44:02.790 17:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1xMLNJpsEs 00:44:03.050 [2024-11-05 17:08:09.957492] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1xMLNJpsEs': 0100660 00:44:03.050 [2024-11-05 17:08:09.957509] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:03.050 request: 00:44:03.050 { 00:44:03.050 "name": "key0", 00:44:03.050 "path": "/tmp/tmp.1xMLNJpsEs", 00:44:03.050 "method": "keyring_file_add_key", 00:44:03.050 "req_id": 1 00:44:03.050 } 00:44:03.050 Got JSON-RPC error response 00:44:03.050 response: 00:44:03.050 { 00:44:03.050 "code": -1, 00:44:03.050 "message": "Operation not permitted" 00:44:03.050 } 00:44:03.050 17:08:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:44:03.050 17:08:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:03.050 17:08:09 keyring_file -- common/autotest_common.sh@672 
-- # [[ -n '' ]] 00:44:03.050 17:08:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:03.050 17:08:09 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.1xMLNJpsEs 00:44:03.050 17:08:09 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1xMLNJpsEs 00:44:03.050 17:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1xMLNJpsEs 00:44:03.310 17:08:10 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.1xMLNJpsEs 00:44:03.310 17:08:10 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:03.310 17:08:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:03.310 17:08:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:03.310 17:08:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:03.310 17:08:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:03.310 17:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:03.310 17:08:10 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:03.310 17:08:10 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:03.310 17:08:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:44:03.310 17:08:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:03.310 17:08:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:44:03.310 17:08:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:44:03.310 17:08:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:44:03.310 17:08:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:03.310 17:08:10 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:03.310 17:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:03.570 [2024-11-05 17:08:10.483274] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.1xMLNJpsEs': No such file or directory 00:44:03.570 [2024-11-05 17:08:10.483290] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:03.570 [2024-11-05 17:08:10.483304] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:03.570 [2024-11-05 17:08:10.483309] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:03.570 [2024-11-05 17:08:10.483315] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:03.570 [2024-11-05 17:08:10.483320] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:03.570 request: 00:44:03.570 { 00:44:03.570 "name": "nvme0", 00:44:03.570 "trtype": "tcp", 00:44:03.570 "traddr": "127.0.0.1", 00:44:03.570 "adrfam": "ipv4", 00:44:03.570 "trsvcid": "4420", 00:44:03.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:03.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:03.570 "prchk_reftag": 
false, 00:44:03.570 "prchk_guard": false, 00:44:03.570 "hdgst": false, 00:44:03.570 "ddgst": false, 00:44:03.570 "psk": "key0", 00:44:03.570 "allow_unrecognized_csi": false, 00:44:03.570 "method": "bdev_nvme_attach_controller", 00:44:03.570 "req_id": 1 00:44:03.570 } 00:44:03.570 Got JSON-RPC error response 00:44:03.570 response: 00:44:03.570 { 00:44:03.570 "code": -19, 00:44:03.570 "message": "No such device" 00:44:03.570 } 00:44:03.570 17:08:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:44:03.570 17:08:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:03.570 17:08:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:03.570 17:08:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:03.570 17:08:10 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:03.570 17:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:03.831 17:08:10 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GTGigZfy8A 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:03.831 17:08:10 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:03.831 17:08:10 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 
00:44:03.831 17:08:10 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:03.831 17:08:10 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:44:03.831 17:08:10 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:44:03.831 17:08:10 keyring_file -- nvmf/common.sh@507 -- # python - 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GTGigZfy8A 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GTGigZfy8A 00:44:03.831 17:08:10 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.GTGigZfy8A 00:44:03.831 17:08:10 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GTGigZfy8A 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GTGigZfy8A 00:44:03.831 17:08:10 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:03.831 17:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:04.091 nvme0n1 00:44:04.091 17:08:11 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:04.091 17:08:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:04.091 17:08:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:04.091 17:08:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:04.091 17:08:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:04.091 17:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:44:04.351 17:08:11 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:04.351 17:08:11 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:04.351 17:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:04.611 17:08:11 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:04.611 17:08:11 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:04.611 17:08:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:04.611 17:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.611 17:08:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:04.611 17:08:11 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:04.611 17:08:11 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:04.611 17:08:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:04.611 17:08:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:04.611 17:08:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:04.611 17:08:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:04.611 17:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.871 17:08:11 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:04.871 17:08:11 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:04.871 17:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:05.131 17:08:11 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:05.131 17:08:11 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:05.131 17:08:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:05.131 17:08:12 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:05.131 17:08:12 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GTGigZfy8A 00:44:05.131 17:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GTGigZfy8A 00:44:05.391 17:08:12 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6NI92n1wOQ 00:44:05.391 17:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6NI92n1wOQ 00:44:05.652 17:08:12 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:05.652 17:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:05.652 nvme0n1 00:44:05.652 17:08:12 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:05.652 17:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:05.912 17:08:12 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:05.912 "subsystems": [ 00:44:05.912 { 00:44:05.912 "subsystem": "keyring", 00:44:05.912 "config": [ 00:44:05.912 { 00:44:05.912 "method": 
"keyring_file_add_key", 00:44:05.912 "params": { 00:44:05.912 "name": "key0", 00:44:05.912 "path": "/tmp/tmp.GTGigZfy8A" 00:44:05.912 } 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "method": "keyring_file_add_key", 00:44:05.912 "params": { 00:44:05.912 "name": "key1", 00:44:05.912 "path": "/tmp/tmp.6NI92n1wOQ" 00:44:05.912 } 00:44:05.912 } 00:44:05.912 ] 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "subsystem": "iobuf", 00:44:05.912 "config": [ 00:44:05.912 { 00:44:05.912 "method": "iobuf_set_options", 00:44:05.912 "params": { 00:44:05.912 "small_pool_count": 8192, 00:44:05.912 "large_pool_count": 1024, 00:44:05.912 "small_bufsize": 8192, 00:44:05.912 "large_bufsize": 135168, 00:44:05.912 "enable_numa": false 00:44:05.912 } 00:44:05.912 } 00:44:05.912 ] 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "subsystem": "sock", 00:44:05.912 "config": [ 00:44:05.912 { 00:44:05.912 "method": "sock_set_default_impl", 00:44:05.912 "params": { 00:44:05.912 "impl_name": "posix" 00:44:05.912 } 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "method": "sock_impl_set_options", 00:44:05.912 "params": { 00:44:05.912 "impl_name": "ssl", 00:44:05.912 "recv_buf_size": 4096, 00:44:05.912 "send_buf_size": 4096, 00:44:05.912 "enable_recv_pipe": true, 00:44:05.912 "enable_quickack": false, 00:44:05.912 "enable_placement_id": 0, 00:44:05.912 "enable_zerocopy_send_server": true, 00:44:05.912 "enable_zerocopy_send_client": false, 00:44:05.912 "zerocopy_threshold": 0, 00:44:05.912 "tls_version": 0, 00:44:05.912 "enable_ktls": false 00:44:05.912 } 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "method": "sock_impl_set_options", 00:44:05.912 "params": { 00:44:05.912 "impl_name": "posix", 00:44:05.912 "recv_buf_size": 2097152, 00:44:05.912 "send_buf_size": 2097152, 00:44:05.912 "enable_recv_pipe": true, 00:44:05.912 "enable_quickack": false, 00:44:05.912 "enable_placement_id": 0, 00:44:05.912 "enable_zerocopy_send_server": true, 00:44:05.912 "enable_zerocopy_send_client": false, 00:44:05.912 
"zerocopy_threshold": 0, 00:44:05.912 "tls_version": 0, 00:44:05.912 "enable_ktls": false 00:44:05.912 } 00:44:05.912 } 00:44:05.912 ] 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "subsystem": "vmd", 00:44:05.912 "config": [] 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "subsystem": "accel", 00:44:05.912 "config": [ 00:44:05.912 { 00:44:05.912 "method": "accel_set_options", 00:44:05.912 "params": { 00:44:05.912 "small_cache_size": 128, 00:44:05.912 "large_cache_size": 16, 00:44:05.912 "task_count": 2048, 00:44:05.912 "sequence_count": 2048, 00:44:05.912 "buf_count": 2048 00:44:05.912 } 00:44:05.912 } 00:44:05.912 ] 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "subsystem": "bdev", 00:44:05.912 "config": [ 00:44:05.912 { 00:44:05.912 "method": "bdev_set_options", 00:44:05.912 "params": { 00:44:05.912 "bdev_io_pool_size": 65535, 00:44:05.912 "bdev_io_cache_size": 256, 00:44:05.912 "bdev_auto_examine": true, 00:44:05.912 "iobuf_small_cache_size": 128, 00:44:05.912 "iobuf_large_cache_size": 16 00:44:05.912 } 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "method": "bdev_raid_set_options", 00:44:05.912 "params": { 00:44:05.912 "process_window_size_kb": 1024, 00:44:05.912 "process_max_bandwidth_mb_sec": 0 00:44:05.912 } 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "method": "bdev_iscsi_set_options", 00:44:05.912 "params": { 00:44:05.912 "timeout_sec": 30 00:44:05.912 } 00:44:05.912 }, 00:44:05.912 { 00:44:05.912 "method": "bdev_nvme_set_options", 00:44:05.912 "params": { 00:44:05.912 "action_on_timeout": "none", 00:44:05.912 "timeout_us": 0, 00:44:05.912 "timeout_admin_us": 0, 00:44:05.912 "keep_alive_timeout_ms": 10000, 00:44:05.912 "arbitration_burst": 0, 00:44:05.912 "low_priority_weight": 0, 00:44:05.912 "medium_priority_weight": 0, 00:44:05.912 "high_priority_weight": 0, 00:44:05.912 "nvme_adminq_poll_period_us": 10000, 00:44:05.912 "nvme_ioq_poll_period_us": 0, 00:44:05.912 "io_queue_requests": 512, 00:44:05.912 "delay_cmd_submit": true, 00:44:05.912 
"transport_retry_count": 4, 00:44:05.912 "bdev_retry_count": 3, 00:44:05.912 "transport_ack_timeout": 0, 00:44:05.912 "ctrlr_loss_timeout_sec": 0, 00:44:05.912 "reconnect_delay_sec": 0, 00:44:05.912 "fast_io_fail_timeout_sec": 0, 00:44:05.913 "disable_auto_failback": false, 00:44:05.913 "generate_uuids": false, 00:44:05.913 "transport_tos": 0, 00:44:05.913 "nvme_error_stat": false, 00:44:05.913 "rdma_srq_size": 0, 00:44:05.913 "io_path_stat": false, 00:44:05.913 "allow_accel_sequence": false, 00:44:05.913 "rdma_max_cq_size": 0, 00:44:05.913 "rdma_cm_event_timeout_ms": 0, 00:44:05.913 "dhchap_digests": [ 00:44:05.913 "sha256", 00:44:05.913 "sha384", 00:44:05.913 "sha512" 00:44:05.913 ], 00:44:05.913 "dhchap_dhgroups": [ 00:44:05.913 "null", 00:44:05.913 "ffdhe2048", 00:44:05.913 "ffdhe3072", 00:44:05.913 "ffdhe4096", 00:44:05.913 "ffdhe6144", 00:44:05.913 "ffdhe8192" 00:44:05.913 ] 00:44:05.913 } 00:44:05.913 }, 00:44:05.913 { 00:44:05.913 "method": "bdev_nvme_attach_controller", 00:44:05.913 "params": { 00:44:05.913 "name": "nvme0", 00:44:05.913 "trtype": "TCP", 00:44:05.913 "adrfam": "IPv4", 00:44:05.913 "traddr": "127.0.0.1", 00:44:05.913 "trsvcid": "4420", 00:44:05.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:05.913 "prchk_reftag": false, 00:44:05.913 "prchk_guard": false, 00:44:05.913 "ctrlr_loss_timeout_sec": 0, 00:44:05.913 "reconnect_delay_sec": 0, 00:44:05.913 "fast_io_fail_timeout_sec": 0, 00:44:05.913 "psk": "key0", 00:44:05.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:05.913 "hdgst": false, 00:44:05.913 "ddgst": false, 00:44:05.913 "multipath": "multipath" 00:44:05.913 } 00:44:05.913 }, 00:44:05.913 { 00:44:05.913 "method": "bdev_nvme_set_hotplug", 00:44:05.913 "params": { 00:44:05.913 "period_us": 100000, 00:44:05.913 "enable": false 00:44:05.913 } 00:44:05.913 }, 00:44:05.913 { 00:44:05.913 "method": "bdev_wait_for_examine" 00:44:05.913 } 00:44:05.913 ] 00:44:05.913 }, 00:44:05.913 { 00:44:05.913 "subsystem": "nbd", 00:44:05.913 "config": [] 
00:44:05.913 } 00:44:05.913 ] 00:44:05.913 }' 00:44:05.913 17:08:12 keyring_file -- keyring/file.sh@115 -- # killprocess 3511425 00:44:05.913 17:08:12 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3511425 ']' 00:44:05.913 17:08:12 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3511425 00:44:05.913 17:08:12 keyring_file -- common/autotest_common.sh@957 -- # uname 00:44:05.913 17:08:12 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:05.913 17:08:12 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3511425 00:44:06.172 17:08:12 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3511425' 00:44:06.172 killing process with pid 3511425 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@971 -- # kill 3511425 00:44:06.172 Received shutdown signal, test time was about 1.000000 seconds 00:44:06.172 00:44:06.172 Latency(us) 00:44:06.172 [2024-11-05T16:08:13.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:06.172 [2024-11-05T16:08:13.235Z] =================================================================================================================== 00:44:06.172 [2024-11-05T16:08:13.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@976 -- # wait 3511425 00:44:06.172 17:08:13 keyring_file -- keyring/file.sh@118 -- # bperfpid=3513236 00:44:06.172 17:08:13 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3513236 /var/tmp/bperf.sock 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3513236 ']' 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:06.172 17:08:13 keyring_file 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:44:06.172 17:08:13 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:06.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:06.172 17:08:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:06.172 17:08:13 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:06.172 "subsystems": [ 00:44:06.172 { 00:44:06.172 "subsystem": "keyring", 00:44:06.172 "config": [ 00:44:06.172 { 00:44:06.172 "method": "keyring_file_add_key", 00:44:06.172 "params": { 00:44:06.172 "name": "key0", 00:44:06.172 "path": "/tmp/tmp.GTGigZfy8A" 00:44:06.172 } 00:44:06.172 }, 00:44:06.172 { 00:44:06.172 "method": "keyring_file_add_key", 00:44:06.172 "params": { 00:44:06.172 "name": "key1", 00:44:06.172 "path": "/tmp/tmp.6NI92n1wOQ" 00:44:06.172 } 00:44:06.172 } 00:44:06.172 ] 00:44:06.172 }, 00:44:06.172 { 00:44:06.172 "subsystem": "iobuf", 00:44:06.172 "config": [ 00:44:06.172 { 00:44:06.172 "method": "iobuf_set_options", 00:44:06.172 "params": { 00:44:06.172 "small_pool_count": 8192, 00:44:06.172 "large_pool_count": 1024, 00:44:06.172 "small_bufsize": 8192, 00:44:06.172 "large_bufsize": 135168, 00:44:06.172 "enable_numa": false 00:44:06.172 } 00:44:06.172 } 00:44:06.172 ] 00:44:06.172 }, 00:44:06.172 { 00:44:06.172 "subsystem": "sock", 00:44:06.172 "config": [ 00:44:06.172 { 00:44:06.172 "method": "sock_set_default_impl", 00:44:06.172 "params": { 00:44:06.172 "impl_name": "posix" 00:44:06.172 } 00:44:06.172 }, 00:44:06.172 { 00:44:06.172 "method": "sock_impl_set_options", 
00:44:06.172 "params": { 00:44:06.172 "impl_name": "ssl", 00:44:06.172 "recv_buf_size": 4096, 00:44:06.172 "send_buf_size": 4096, 00:44:06.173 "enable_recv_pipe": true, 00:44:06.173 "enable_quickack": false, 00:44:06.173 "enable_placement_id": 0, 00:44:06.173 "enable_zerocopy_send_server": true, 00:44:06.173 "enable_zerocopy_send_client": false, 00:44:06.173 "zerocopy_threshold": 0, 00:44:06.173 "tls_version": 0, 00:44:06.173 "enable_ktls": false 00:44:06.173 } 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "method": "sock_impl_set_options", 00:44:06.173 "params": { 00:44:06.173 "impl_name": "posix", 00:44:06.173 "recv_buf_size": 2097152, 00:44:06.173 "send_buf_size": 2097152, 00:44:06.173 "enable_recv_pipe": true, 00:44:06.173 "enable_quickack": false, 00:44:06.173 "enable_placement_id": 0, 00:44:06.173 "enable_zerocopy_send_server": true, 00:44:06.173 "enable_zerocopy_send_client": false, 00:44:06.173 "zerocopy_threshold": 0, 00:44:06.173 "tls_version": 0, 00:44:06.173 "enable_ktls": false 00:44:06.173 } 00:44:06.173 } 00:44:06.173 ] 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "subsystem": "vmd", 00:44:06.173 "config": [] 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "subsystem": "accel", 00:44:06.173 "config": [ 00:44:06.173 { 00:44:06.173 "method": "accel_set_options", 00:44:06.173 "params": { 00:44:06.173 "small_cache_size": 128, 00:44:06.173 "large_cache_size": 16, 00:44:06.173 "task_count": 2048, 00:44:06.173 "sequence_count": 2048, 00:44:06.173 "buf_count": 2048 00:44:06.173 } 00:44:06.173 } 00:44:06.173 ] 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "subsystem": "bdev", 00:44:06.173 "config": [ 00:44:06.173 { 00:44:06.173 "method": "bdev_set_options", 00:44:06.173 "params": { 00:44:06.173 "bdev_io_pool_size": 65535, 00:44:06.173 "bdev_io_cache_size": 256, 00:44:06.173 "bdev_auto_examine": true, 00:44:06.173 "iobuf_small_cache_size": 128, 00:44:06.173 "iobuf_large_cache_size": 16 00:44:06.173 } 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "method": 
"bdev_raid_set_options", 00:44:06.173 "params": { 00:44:06.173 "process_window_size_kb": 1024, 00:44:06.173 "process_max_bandwidth_mb_sec": 0 00:44:06.173 } 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "method": "bdev_iscsi_set_options", 00:44:06.173 "params": { 00:44:06.173 "timeout_sec": 30 00:44:06.173 } 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "method": "bdev_nvme_set_options", 00:44:06.173 "params": { 00:44:06.173 "action_on_timeout": "none", 00:44:06.173 "timeout_us": 0, 00:44:06.173 "timeout_admin_us": 0, 00:44:06.173 "keep_alive_timeout_ms": 10000, 00:44:06.173 "arbitration_burst": 0, 00:44:06.173 "low_priority_weight": 0, 00:44:06.173 "medium_priority_weight": 0, 00:44:06.173 "high_priority_weight": 0, 00:44:06.173 "nvme_adminq_poll_period_us": 10000, 00:44:06.173 "nvme_ioq_poll_period_us": 0, 00:44:06.173 "io_queue_requests": 512, 00:44:06.173 "delay_cmd_submit": true, 00:44:06.173 "transport_retry_count": 4, 00:44:06.173 "bdev_retry_count": 3, 00:44:06.173 "transport_ack_timeout": 0, 00:44:06.173 "ctrlr_loss_timeout_sec": 0, 00:44:06.173 "reconnect_delay_sec": 0, 00:44:06.173 "fast_io_fail_timeout_sec": 0, 00:44:06.173 "disable_auto_failback": false, 00:44:06.173 "generate_uuids": false, 00:44:06.173 "transport_tos": 0, 00:44:06.173 "nvme_error_stat": false, 00:44:06.173 "rdma_srq_size": 0, 00:44:06.173 "io_path_stat": false, 00:44:06.173 "allow_accel_sequence": false, 00:44:06.173 "rdma_max_cq_size": 0, 00:44:06.173 "rdma_cm_event_timeout_ms": 0, 00:44:06.173 "dhchap_digests": [ 00:44:06.173 "sha256", 00:44:06.173 "sha384", 00:44:06.173 "sha512" 00:44:06.173 ], 00:44:06.173 "dhchap_dhgroups": [ 00:44:06.173 "null", 00:44:06.173 "ffdhe2048", 00:44:06.173 "ffdhe3072", 00:44:06.173 "ffdhe4096", 00:44:06.173 "ffdhe6144", 00:44:06.173 "ffdhe8192" 00:44:06.173 ] 00:44:06.173 } 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "method": "bdev_nvme_attach_controller", 00:44:06.173 "params": { 00:44:06.173 "name": "nvme0", 00:44:06.173 "trtype": "TCP", 00:44:06.173 
"adrfam": "IPv4", 00:44:06.173 "traddr": "127.0.0.1", 00:44:06.173 "trsvcid": "4420", 00:44:06.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:06.173 "prchk_reftag": false, 00:44:06.173 "prchk_guard": false, 00:44:06.173 "ctrlr_loss_timeout_sec": 0, 00:44:06.173 "reconnect_delay_sec": 0, 00:44:06.173 "fast_io_fail_timeout_sec": 0, 00:44:06.173 "psk": "key0", 00:44:06.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:06.173 "hdgst": false, 00:44:06.173 "ddgst": false, 00:44:06.173 "multipath": "multipath" 00:44:06.173 } 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "method": "bdev_nvme_set_hotplug", 00:44:06.173 "params": { 00:44:06.173 "period_us": 100000, 00:44:06.173 "enable": false 00:44:06.173 } 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "method": "bdev_wait_for_examine" 00:44:06.173 } 00:44:06.173 ] 00:44:06.173 }, 00:44:06.173 { 00:44:06.173 "subsystem": "nbd", 00:44:06.173 "config": [] 00:44:06.173 } 00:44:06.173 ] 00:44:06.173 }' 00:44:06.173 [2024-11-05 17:08:13.157122] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:44:06.173 [2024-11-05 17:08:13.157181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513236 ] 00:44:06.432 [2024-11-05 17:08:13.240697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:06.432 [2024-11-05 17:08:13.269966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.432 [2024-11-05 17:08:13.412955] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:07.003 17:08:13 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:07.003 17:08:13 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:44:07.003 17:08:13 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:07.003 17:08:13 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:07.003 17:08:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.264 17:08:14 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:07.264 17:08:14 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.264 17:08:14 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:07.264 17:08:14 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:07.264 17:08:14 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.264 17:08:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:07.525 17:08:14 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:07.525 17:08:14 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:07.525 17:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:07.525 17:08:14 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:07.785 17:08:14 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:07.785 17:08:14 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:07.785 17:08:14 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.GTGigZfy8A /tmp/tmp.6NI92n1wOQ 00:44:07.785 17:08:14 keyring_file -- keyring/file.sh@20 -- # killprocess 3513236 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3513236 ']' 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3513236 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@957 -- # uname 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3513236 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 3513236' 00:44:07.785 killing process with pid 3513236 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@971 -- # kill 3513236 00:44:07.785 Received shutdown signal, test time was about 1.000000 seconds 00:44:07.785 00:44:07.785 Latency(us) 00:44:07.785 [2024-11-05T16:08:14.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:07.785 [2024-11-05T16:08:14.848Z] =================================================================================================================== 00:44:07.785 [2024-11-05T16:08:14.848Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@976 -- # wait 3513236 00:44:07.785 17:08:14 keyring_file -- keyring/file.sh@21 -- # killprocess 3511357 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3511357 ']' 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3511357 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@957 -- # uname 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:07.785 17:08:14 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3511357 00:44:08.045 17:08:14 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:08.045 17:08:14 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:08.045 17:08:14 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3511357' 00:44:08.045 killing process with pid 3511357 00:44:08.045 17:08:14 keyring_file -- common/autotest_common.sh@971 -- # kill 3511357 00:44:08.045 17:08:14 keyring_file -- common/autotest_common.sh@976 -- # wait 3511357 00:44:08.045 00:44:08.045 real 0m11.697s 00:44:08.045 user 0m28.069s 00:44:08.045 sys 0m2.619s 00:44:08.045 17:08:15 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:44:08.045 17:08:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:08.045 ************************************ 00:44:08.045 END TEST keyring_file 00:44:08.045 ************************************ 00:44:08.045 17:08:15 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:44:08.045 17:08:15 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:08.045 17:08:15 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:44:08.045 17:08:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:08.045 17:08:15 -- common/autotest_common.sh@10 -- # set +x 00:44:08.306 ************************************ 00:44:08.306 START TEST keyring_linux 00:44:08.306 ************************************ 00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:08.306 Joined session keyring: 645995155 00:44:08.306 * Looking for test storage... 
00:44:08.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:08.306 17:08:15 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:08.306 --rc genhtml_branch_coverage=1 00:44:08.306 --rc genhtml_function_coverage=1 00:44:08.306 --rc genhtml_legend=1 00:44:08.306 --rc geninfo_all_blocks=1 00:44:08.306 --rc geninfo_unexecuted_blocks=1 00:44:08.306 00:44:08.306 ' 00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:08.306 --rc genhtml_branch_coverage=1 00:44:08.306 --rc genhtml_function_coverage=1 00:44:08.306 --rc genhtml_legend=1 00:44:08.306 --rc geninfo_all_blocks=1 00:44:08.306 --rc geninfo_unexecuted_blocks=1 00:44:08.306 00:44:08.306 ' 
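Editor's note: the `cmp_versions`/`lt` xtrace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x before choosing `LCOV_OPTS`. A hedged Python sketch of that dotted-version comparison (field splitting on `.`, `-`, `:` mirrors the `IFS=.-:` in the trace; details are approximate, not a line-for-line port):

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    """Compare dotted version strings numerically, field by field,
    left to right; missing trailing fields count as 0 (so "2" == "2.0")."""
    p1 = [int(x) for x in re.split(r"[.\-:]", v1) if x.isdigit()]
    p2 = [int(x) for x in re.split(r"[.\-:]", v2) if x.isdigit()]
    # Pad both sides so zip never truncates a longer version string.
    for a, b in zip(p1 + [0] * len(p2), p2 + [0] * len(p1)):
        if a != b:
            return a < b
    return False
```

In the log, `lt 1.15 2` succeeds (returns 0), so the lcov-1.x-compatible `--rc lcov_branch_coverage=1 ...` options are exported.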
00:44:08.306 17:08:15 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:08.307 --rc genhtml_branch_coverage=1 00:44:08.307 --rc genhtml_function_coverage=1 00:44:08.307 --rc genhtml_legend=1 00:44:08.307 --rc geninfo_all_blocks=1 00:44:08.307 --rc geninfo_unexecuted_blocks=1 00:44:08.307 00:44:08.307 ' 00:44:08.307 17:08:15 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:08.307 --rc genhtml_branch_coverage=1 00:44:08.307 --rc genhtml_function_coverage=1 00:44:08.307 --rc genhtml_legend=1 00:44:08.307 --rc geninfo_all_blocks=1 00:44:08.307 --rc geninfo_unexecuted_blocks=1 00:44:08.307 00:44:08.307 ' 00:44:08.307 17:08:15 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:08.307 17:08:15 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:08.307 17:08:15 
keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:08.307 17:08:15 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:08.307 17:08:15 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:08.307 17:08:15 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:08.307 17:08:15 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:08.307 17:08:15 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:08.307 17:08:15 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:08.307 17:08:15 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:08.307 17:08:15 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:08.307 17:08:15 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:08.307 17:08:15 keyring_linux -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:44:08.568 17:08:15 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:44:08.568 17:08:15 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:44:08.568 17:08:15 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:44:08.568 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@507 -- # python - 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:08.568 /tmp/:spdk-test:key0 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:44:08.568 17:08:15 keyring_linux -- nvmf/common.sh@507 -- # python - 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:08.568 17:08:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:08.568 /tmp/:spdk-test:key1 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:08.568 
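Editor's note: the `format_interchange_psk` step above wraps the configured key in the NVMe TLS PSK interchange framing (`NVMeTLSkey-1:00:<base64>:`) before keyctl loads it. A hedged reconstruction of what the `python -` one-liner in nvmf/common.sh appears to compute — the CRC32 trailer and its little-endian packing are inferred from the output format, not quoted from the script:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_id: int = 0) -> str:
    """Assumed framing: the configured key string is taken as raw bytes,
    its CRC32 is appended little-endian, and the result is base64-encoded
    inside "NVMeTLSkey-1:<hh>:<b64>:" (hash_id 00 = no PSK hash)."""
    raw = key.encode()
    payload = raw + struct.pack("<I", zlib.crc32(raw))
    return f"NVMeTLSkey-1:{hash_id:02}:{base64.b64encode(payload).decode()}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

Note that the base64 body in the log (`MDAxMTIyMzM0...`) decodes back to the ASCII hex string itself plus four trailing checksum bytes, which is what suggests the key-bytes-plus-CRC layout sketched here.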
17:08:15 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3513676 00:44:08.568 17:08:15 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3513676 00:44:08.568 17:08:15 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3513676 ']' 00:44:08.568 17:08:15 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:08.568 17:08:15 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:08.568 17:08:15 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:08.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:08.568 17:08:15 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:08.568 17:08:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:08.568 [2024-11-05 17:08:15.520206] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:44:08.568 [2024-11-05 17:08:15.520282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513676 ] 00:44:08.568 [2024-11-05 17:08:15.595242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.828 [2024-11-05 17:08:15.637171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:08.828 17:08:15 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:08.828 17:08:15 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:44:08.828 17:08:15 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:08.828 17:08:15 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.828 17:08:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:08.828 [2024-11-05 17:08:15.831878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:08.828 null0 00:44:08.828 [2024-11-05 17:08:15.863908] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:08.828 [2024-11-05 17:08:15.864299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:08.828 17:08:15 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.828 17:08:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:08.828 108805769 00:44:08.828 17:08:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:08.828 752303045 00:44:09.088 17:08:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3513705 00:44:09.088 17:08:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3513705 /var/tmp/bperf.sock 00:44:09.088 17:08:15 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:09.088 17:08:15 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3513705 ']' 00:44:09.088 17:08:15 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:09.088 17:08:15 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:09.088 17:08:15 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:09.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:09.088 17:08:15 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:09.088 17:08:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:09.088 [2024-11-05 17:08:15.951742] Starting SPDK v25.01-pre git sha1 dbbc706e0 / DPDK 24.03.0 initialization... 
00:44:09.088 [2024-11-05 17:08:15.951810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513705 ] 00:44:09.088 [2024-11-05 17:08:16.036148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:09.088 [2024-11-05 17:08:16.065957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:10.029 17:08:16 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:10.029 17:08:16 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:44:10.029 17:08:16 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:10.029 17:08:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:10.029 17:08:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:10.029 17:08:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:10.029 17:08:17 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:10.029 17:08:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:10.289 [2024-11-05 17:08:17.242034] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:10.289 nvme0n1 00:44:10.289 17:08:17 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:10.289 17:08:17 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:10.289 17:08:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:10.289 17:08:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:10.289 17:08:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:10.289 17:08:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.549 17:08:17 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:10.549 17:08:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:10.549 17:08:17 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:10.549 17:08:17 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:10.549 17:08:17 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.549 17:08:17 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:10.549 17:08:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.809 17:08:17 keyring_linux -- keyring/linux.sh@25 -- # sn=108805769 00:44:10.809 17:08:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:10.809 17:08:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:10.810 17:08:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 108805769 == \1\0\8\8\0\5\7\6\9 ]] 00:44:10.810 17:08:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 108805769 00:44:10.810 17:08:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:10.810 17:08:17 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:10.810 Running I/O for 1 seconds... 00:44:12.013 5323.00 IOPS, 20.79 MiB/s 00:44:12.014 Latency(us) 00:44:12.014 [2024-11-05T16:08:19.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:12.014 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:12.014 nvme0n1 : 1.05 5171.98 20.20 0.00 0.00 24512.72 1884.16 86507.52 00:44:12.014 [2024-11-05T16:08:19.077Z] =================================================================================================================== 00:44:12.014 [2024-11-05T16:08:19.077Z] Total : 5171.98 20.20 0.00 0.00 24512.72 1884.16 86507.52 00:44:12.014 { 00:44:12.014 "results": [ 00:44:12.014 { 00:44:12.014 "job": "nvme0n1", 00:44:12.014 "core_mask": "0x2", 00:44:12.014 "workload": "randread", 00:44:12.014 "status": "finished", 00:44:12.014 "queue_depth": 128, 00:44:12.014 "io_size": 4096, 00:44:12.014 "runtime": 1.053948, 00:44:12.014 "iops": 5171.9819194115835, 00:44:12.014 "mibps": 20.203054372701498, 00:44:12.014 "io_failed": 0, 00:44:12.014 "io_timeout": 0, 00:44:12.014 "avg_latency_us": 24512.72026906378, 00:44:12.014 "min_latency_us": 1884.16, 00:44:12.014 "max_latency_us": 86507.52 00:44:12.014 } 00:44:12.014 ], 00:44:12.014 "core_count": 1 00:44:12.014 } 00:44:12.014 17:08:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:12.014 17:08:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:12.014 17:08:19 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:12.014 17:08:19 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:12.014 17:08:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:12.014 17:08:19 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:12.014 17:08:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:12.014 17:08:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:12.275 17:08:19 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:12.275 17:08:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:12.275 17:08:19 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:12.275 17:08:19 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:12.275 17:08:19 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:44:12.275 17:08:19 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:12.275 17:08:19 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:44:12.275 17:08:19 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:12.275 17:08:19 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:44:12.275 17:08:19 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:12.275 17:08:19 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:12.275 17:08:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:12.536 [2024-11-05 17:08:19.384320] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:12.536 [2024-11-05 17:08:19.385054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b16480 (107): Transport endpoint is not connected 00:44:12.536 [2024-11-05 17:08:19.386050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b16480 (9): Bad file descriptor 00:44:12.536 [2024-11-05 17:08:19.387052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:12.536 [2024-11-05 17:08:19.387060] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:12.536 [2024-11-05 17:08:19.387066] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:12.536 [2024-11-05 17:08:19.387072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:12.536 request: 00:44:12.536 { 00:44:12.536 "name": "nvme0", 00:44:12.536 "trtype": "tcp", 00:44:12.536 "traddr": "127.0.0.1", 00:44:12.536 "adrfam": "ipv4", 00:44:12.536 "trsvcid": "4420", 00:44:12.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:12.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:12.536 "prchk_reftag": false, 00:44:12.536 "prchk_guard": false, 00:44:12.536 "hdgst": false, 00:44:12.536 "ddgst": false, 00:44:12.536 "psk": ":spdk-test:key1", 00:44:12.536 "allow_unrecognized_csi": false, 00:44:12.536 "method": "bdev_nvme_attach_controller", 00:44:12.536 "req_id": 1 00:44:12.536 } 00:44:12.536 Got JSON-RPC error response 00:44:12.536 response: 00:44:12.536 { 00:44:12.536 "code": -5, 00:44:12.536 "message": "Input/output error" 00:44:12.536 } 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@33 -- # sn=108805769 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 108805769 00:44:12.536 1 links removed 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:12.536 
17:08:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@33 -- # sn=752303045 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 752303045 00:44:12.536 1 links removed 00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3513705 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3513705 ']' 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3513705 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3513705 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3513705' 00:44:12.536 killing process with pid 3513705 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@971 -- # kill 3513705 00:44:12.536 Received shutdown signal, test time was about 1.000000 seconds 00:44:12.536 00:44:12.536 Latency(us) 00:44:12.536 [2024-11-05T16:08:19.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:12.536 [2024-11-05T16:08:19.599Z] =================================================================================================================== 00:44:12.536 [2024-11-05T16:08:19.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@976 -- # wait 3513705 
00:44:12.536 17:08:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3513676 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3513676 ']' 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3513676 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:12.536 17:08:19 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3513676 00:44:12.799 17:08:19 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:12.799 17:08:19 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:12.799 17:08:19 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3513676' 00:44:12.799 killing process with pid 3513676 00:44:12.799 17:08:19 keyring_linux -- common/autotest_common.sh@971 -- # kill 3513676 00:44:12.799 17:08:19 keyring_linux -- common/autotest_common.sh@976 -- # wait 3513676 00:44:12.799 00:44:12.799 real 0m4.726s 00:44:12.799 user 0m9.473s 00:44:12.799 sys 0m1.100s 00:44:13.060 17:08:19 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:13.060 17:08:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:13.060 ************************************ 00:44:13.060 END TEST keyring_linux 00:44:13.060 ************************************ 00:44:13.060 17:08:19 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:13.060 17:08:19 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:44:13.060 17:08:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:13.060 17:08:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:13.060 17:08:19 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:44:13.060 17:08:19 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:44:13.060 17:08:19 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:44:13.060 17:08:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:13.060 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:44:13.060 17:08:19 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:44:13.060 17:08:19 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:44:13.060 17:08:19 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:44:13.060 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:44:21.202 INFO: APP EXITING 00:44:21.202 INFO: killing all VMs 00:44:21.202 INFO: killing vhost app 00:44:21.202 INFO: EXIT DONE 00:44:23.751 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:44:23.751 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:44:23.751 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:65:00.0 (144d a80a): Already using the nvme driver 00:44:24.013 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:44:24.013 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:44:24.013 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:44:24.274 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:44:24.274 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:44:24.274 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:44:27.578 Cleaning 00:44:27.578 Removing: /var/run/dpdk/spdk0/config 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:27.578 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:27.578 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:27.578 Removing: /var/run/dpdk/spdk1/config 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:27.578 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:27.578 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:27.578 Removing: /var/run/dpdk/spdk2/config 00:44:27.578 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:27.578 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:27.578 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:27.578 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:27.578 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:27.578 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:27.578 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:27.578 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:27.578 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:27.578 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:27.578 Removing: /var/run/dpdk/spdk3/config 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:27.578 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:27.578 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:27.578 Removing: /var/run/dpdk/spdk4/config 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:27.578 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:27.578 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:44:27.578 Removing: /dev/shm/bdev_svc_trace.1 00:44:27.578 Removing: /dev/shm/nvmf_trace.0 00:44:27.578 Removing: /dev/shm/spdk_tgt_trace.pid2934049 00:44:27.578 Removing: /var/run/dpdk/spdk0 00:44:27.578 Removing: /var/run/dpdk/spdk1 00:44:27.578 Removing: /var/run/dpdk/spdk2 00:44:27.578 Removing: /var/run/dpdk/spdk3 00:44:27.578 Removing: /var/run/dpdk/spdk4 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2932555 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2934049 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2934798 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2935937 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2936164 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2937348 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2937413 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2937819 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2938959 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2939631 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2939986 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2940327 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2940628 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2941022 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2941385 00:44:27.578 Removing: /var/run/dpdk/spdk_pid2941584 00:44:27.579 Removing: /var/run/dpdk/spdk_pid2941826 00:44:27.579 Removing: /var/run/dpdk/spdk_pid2942887 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2946456 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2946824 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2947190 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2947196 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2947590 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2947746 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2948275 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2948328 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2948665 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2948840 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2949023 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2949307 00:44:27.840 Removing: 
/var/run/dpdk/spdk_pid2949804 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2950133 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2950415 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2955110 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2960517 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2973125 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2973831 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2979055 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2979541 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2984683 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2991792 00:44:27.840 Removing: /var/run/dpdk/spdk_pid2994887 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3007646 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3018770 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3021458 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3022487 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3043537 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3048336 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3104568 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3111061 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3118202 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3126053 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3126151 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3127181 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3128290 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3129652 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3130429 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3130437 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3130768 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3130784 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3130862 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3131937 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3132961 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3134053 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3134698 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3134791 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3135116 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3136508 
00:44:27.840 Removing: /var/run/dpdk/spdk_pid3137725 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3147730 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3183925 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3189493 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3191378 00:44:27.840 Removing: /var/run/dpdk/spdk_pid3193629 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3193650 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3193891 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3194004 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3194714 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3196731 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3197811 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3198192 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3200896 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3201325 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3202302 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3207069 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3214361 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3214362 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3214363 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3219065 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3229352 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3234196 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3241436 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3243048 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3244832 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3246539 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3252095 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3257567 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3262601 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3272333 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3272335 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3277527 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3277741 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3278070 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3278545 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3278646 00:44:28.102 Removing: 
/var/run/dpdk/spdk_pid3284145 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3284938 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3290225 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3293527 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3300252 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3306809 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3316763 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3326124 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3326177 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3350159 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3350845 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3351522 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3352202 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3353263 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3353955 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3354712 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3355592 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3360708 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3361002 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3368106 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3368482 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3375143 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3380590 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3392311 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3392981 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3398054 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3398441 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3403642 00:44:28.102 Removing: /var/run/dpdk/spdk_pid3410534 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3413616 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3425929 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3437106 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3439107 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3440120 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3459781 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3464305 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3467620 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3474855 
00:44:28.363 Removing: /var/run/dpdk/spdk_pid3474969 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3481174 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3483834 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3486158 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3487556 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3489999 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3491282 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3501332 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3501922 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3502585 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3505374 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3505874 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3506543 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3511357 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3511425 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3513236 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3513676 00:44:28.363 Removing: /var/run/dpdk/spdk_pid3513705 00:44:28.363 Clean 00:44:28.364 17:08:35 -- common/autotest_common.sh@1451 -- # return 0 00:44:28.364 17:08:35 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:44:28.364 17:08:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:28.364 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:44:28.364 17:08:35 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:44:28.364 17:08:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:28.364 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:44:28.625 17:08:35 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:28.625 17:08:35 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:28.625 17:08:35 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:28.625 17:08:35 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:44:28.625 17:08:35 -- spdk/autotest.sh@394 -- # hostname 00:44:28.625 
17:08:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:28.625 geninfo: WARNING: invalid characters removed from testname! 00:44:55.206 17:09:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:56.148 17:09:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:58.691 17:09:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:00.075 17:09:07 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:01.986 17:09:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:03.406 17:09:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:05.413 17:09:11 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:05.413 17:09:11 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:05.413 17:09:11 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:05.413 17:09:11 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:05.413 17:09:11 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:05.413 17:09:11 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:05.413 + [[ -n 2847531 ]] 00:45:05.413 + sudo kill 
2847531 00:45:05.424 [Pipeline] } 00:45:05.439 [Pipeline] // stage 00:45:05.444 [Pipeline] } 00:45:05.458 [Pipeline] // timeout 00:45:05.463 [Pipeline] } 00:45:05.477 [Pipeline] // catchError 00:45:05.482 [Pipeline] } 00:45:05.496 [Pipeline] // wrap 00:45:05.503 [Pipeline] } 00:45:05.516 [Pipeline] // catchError 00:45:05.525 [Pipeline] stage 00:45:05.528 [Pipeline] { (Epilogue) 00:45:05.541 [Pipeline] catchError 00:45:05.542 [Pipeline] { 00:45:05.555 [Pipeline] echo 00:45:05.557 Cleanup processes 00:45:05.564 [Pipeline] sh 00:45:05.853 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:05.853 3527204 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:05.868 [Pipeline] sh 00:45:06.159 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:06.159 ++ grep -v 'sudo pgrep' 00:45:06.159 ++ awk '{print $1}' 00:45:06.159 + sudo kill -9 00:45:06.159 + true 00:45:06.202 [Pipeline] sh 00:45:06.492 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:18.745 [Pipeline] sh 00:45:19.035 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:19.035 Artifacts sizes are good 00:45:19.051 [Pipeline] archiveArtifacts 00:45:19.059 Archiving artifacts 00:45:19.196 [Pipeline] sh 00:45:19.485 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:19.501 [Pipeline] cleanWs 00:45:19.512 [WS-CLEANUP] Deleting project workspace... 00:45:19.512 [WS-CLEANUP] Deferred wipeout is used... 00:45:19.520 [WS-CLEANUP] done 00:45:19.522 [Pipeline] } 00:45:19.539 [Pipeline] // catchError 00:45:19.552 [Pipeline] sh 00:45:19.839 + logger -p user.info -t JENKINS-CI 00:45:19.850 [Pipeline] } 00:45:19.863 [Pipeline] // stage 00:45:19.868 [Pipeline] } 00:45:19.882 [Pipeline] // node 00:45:19.887 [Pipeline] End of Pipeline 00:45:19.917 Finished: SUCCESS